00:00:00.001 Started by upstream project "autotest-per-patch" build number 132704 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.095 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:06.506 The recommended git tool is: git 00:00:06.506 using credential 00000000-0000-0000-0000-000000000002 00:00:06.508 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:06.523 Fetching changes from the remote Git repository 00:00:06.526 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:06.537 Using shallow fetch with depth 1 00:00:06.537 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:06.537 > git --version # timeout=10 00:00:06.549 > git --version # 'git version 2.39.2' 00:00:06.549 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:06.562 Setting http proxy: proxy-dmz.intel.com:911 00:00:06.562 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:11.891 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:11.903 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:11.916 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:11.916 > git config core.sparsecheckout # timeout=10 00:00:11.928 > git read-tree -mu HEAD # timeout=10 00:00:11.945 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:11.977 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:11.977 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:12.108 [Pipeline] Start of Pipeline 00:00:12.123 [Pipeline] library 00:00:12.125 Loading library shm_lib@master 00:00:12.125 Library shm_lib@master is cached. Copying from home. 00:00:12.141 [Pipeline] node 00:00:12.150 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:12.152 [Pipeline] { 00:00:12.162 [Pipeline] catchError 00:00:12.164 [Pipeline] { 00:00:12.175 [Pipeline] wrap 00:00:12.184 [Pipeline] { 00:00:12.191 [Pipeline] stage 00:00:12.193 [Pipeline] { (Prologue) 00:00:12.208 [Pipeline] echo 00:00:12.209 Node: VM-host-SM17 00:00:12.214 [Pipeline] cleanWs 00:00:12.221 [WS-CLEANUP] Deleting project workspace... 00:00:12.221 [WS-CLEANUP] Deferred wipeout is used... 
00:00:12.227 [WS-CLEANUP] done
00:00:12.402 [Pipeline] setCustomBuildProperty
00:00:12.490 [Pipeline] httpRequest
00:00:12.851 [Pipeline] echo
00:00:12.853 Sorcerer 10.211.164.20 is alive
00:00:12.862 [Pipeline] retry
00:00:12.864 [Pipeline] {
00:00:12.876 [Pipeline] httpRequest
00:00:12.881 HttpMethod: GET
00:00:12.881 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.881 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.895 Response Code: HTTP/1.1 200 OK
00:00:12.895 Success: Status code 200 is in the accepted range: 200,404
00:00:12.896 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:16.481 [Pipeline] }
00:00:16.497 [Pipeline] // retry
00:00:16.504 [Pipeline] sh
00:00:16.785 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:16.800 [Pipeline] httpRequest
00:00:17.151 [Pipeline] echo
00:00:17.152 Sorcerer 10.211.164.20 is alive
00:00:17.163 [Pipeline] retry
00:00:17.165 [Pipeline] {
00:00:17.180 [Pipeline] httpRequest
00:00:17.184 HttpMethod: GET
00:00:17.185 URL: http://10.211.164.20/packages/spdk_2c140f58ffe19fb26bb9d25f4df8ac7937a32557.tar.gz
00:00:17.185 Sending request to url: http://10.211.164.20/packages/spdk_2c140f58ffe19fb26bb9d25f4df8ac7937a32557.tar.gz
00:00:17.190 Response Code: HTTP/1.1 200 OK
00:00:17.190 Success: Status code 200 is in the accepted range: 200,404
00:00:17.191 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_2c140f58ffe19fb26bb9d25f4df8ac7937a32557.tar.gz
00:03:17.203 [Pipeline] }
00:03:17.221 [Pipeline] // retry
00:03:17.229 [Pipeline] sh
00:03:17.510 + tar --no-same-owner -xf spdk_2c140f58ffe19fb26bb9d25f4df8ac7937a32557.tar.gz
00:03:20.818 [Pipeline] sh
00:03:21.212 + git -C spdk log --oneline -n5
00:03:21.212 2c140f58f nvme/rdma: Support accel sequence
00:03:21.212 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing
00:03:21.212 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller
00:03:21.212 48454bb28 bdev/nvme: Add lock to unprotected operations around detach controller
00:03:21.212 4b59d7893 bdev/nvme: Use nbdev always for local nvme_bdev pointer variables
00:03:21.232 [Pipeline] writeFile
00:03:21.247 [Pipeline] sh
00:03:21.528 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:03:21.540 [Pipeline] sh
00:03:21.822 + cat autorun-spdk.conf
00:03:21.822 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:21.822 SPDK_TEST_NVMF=1
00:03:21.822 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:21.822 SPDK_TEST_URING=1
00:03:21.822 SPDK_TEST_USDT=1
00:03:21.822 SPDK_RUN_UBSAN=1
00:03:21.822 NET_TYPE=virt
00:03:21.822 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:21.826 RUN_NIGHTLY=0
00:03:21.827 [Pipeline] }
00:03:21.836 [Pipeline] // stage
00:03:21.846 [Pipeline] stage
00:03:21.848 [Pipeline] { (Run VM)
00:03:21.857 [Pipeline] sh
00:03:22.134 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:03:22.134 + echo 'Start stage prepare_nvme.sh'
00:03:22.134 Start stage prepare_nvme.sh
00:03:22.134 + [[ -n 2 ]]
00:03:22.134 + disk_prefix=ex2
00:03:22.134 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]]
00:03:22.134 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]]
00:03:22.134 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf
00:03:22.134 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:22.134 ++ SPDK_TEST_NVMF=1
00:03:22.134 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:22.134 ++ SPDK_TEST_URING=1
00:03:22.134 ++ SPDK_TEST_USDT=1
00:03:22.134 ++ SPDK_RUN_UBSAN=1
00:03:22.134 ++ NET_TYPE=virt
00:03:22.134 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:22.134 ++ RUN_NIGHTLY=0
00:03:22.134 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2
00:03:22.134 + nvme_files=()
00:03:22.134 + declare -A nvme_files
00:03:22.134 + backend_dir=/var/lib/libvirt/images/backends
00:03:22.134 + nvme_files['nvme.img']=5G
00:03:22.134 + nvme_files['nvme-cmb.img']=5G
00:03:22.134 + nvme_files['nvme-multi0.img']=4G
00:03:22.134 + nvme_files['nvme-multi1.img']=4G
00:03:22.134 + nvme_files['nvme-multi2.img']=4G
00:03:22.134 + nvme_files['nvme-openstack.img']=8G
00:03:22.134 + nvme_files['nvme-zns.img']=5G
00:03:22.134 + (( SPDK_TEST_NVME_PMR == 1 ))
00:03:22.134 + (( SPDK_TEST_FTL == 1 ))
00:03:22.134 + (( SPDK_TEST_NVME_FDP == 1 ))
00:03:22.134 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:03:22.134 + for nvme in "${!nvme_files[@]}"
00:03:22.134 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:03:22.134 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:03:22.134 + for nvme in "${!nvme_files[@]}"
00:03:22.134 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:03:22.134 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:03:22.134 + for nvme in "${!nvme_files[@]}"
00:03:22.134 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:03:22.134 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:03:22.134 + for nvme in "${!nvme_files[@]}"
00:03:22.134 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:03:22.134 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:03:22.134 + for nvme in "${!nvme_files[@]}"
00:03:22.134 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:03:22.134 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:03:22.134 + for nvme in "${!nvme_files[@]}"
00:03:22.134 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:03:22.134 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:03:22.134 + for nvme in "${!nvme_files[@]}"
00:03:22.134 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:03:22.393 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:03:22.393 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:03:22.393 + echo 'End stage prepare_nvme.sh'
00:03:22.393 End stage prepare_nvme.sh
00:03:22.405 [Pipeline] sh
00:03:22.685 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:03:22.685 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b 
/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:03:22.685 
00:03:22.685 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant
00:03:22.685 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk
00:03:22.685 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2
00:03:22.685 HELP=0
00:03:22.685 DRY_RUN=0
00:03:22.685 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:03:22.685 NVME_DISKS_TYPE=nvme,nvme,
00:03:22.685 NVME_AUTO_CREATE=0
00:03:22.685 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:03:22.685 NVME_CMB=,,
00:03:22.685 NVME_PMR=,,
00:03:22.685 NVME_ZNS=,,
00:03:22.685 NVME_MS=,,
00:03:22.685 NVME_FDP=,,
00:03:22.685 SPDK_VAGRANT_DISTRO=fedora39
00:03:22.685 SPDK_VAGRANT_VMCPU=10
00:03:22.685 SPDK_VAGRANT_VMRAM=12288
00:03:22.685 SPDK_VAGRANT_PROVIDER=libvirt
00:03:22.685 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:03:22.685 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:03:22.685 SPDK_OPENSTACK_NETWORK=0
00:03:22.685 VAGRANT_PACKAGE_BOX=0
00:03:22.685 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:03:22.685 FORCE_DISTRO=true
00:03:22.685 VAGRANT_BOX_VERSION=
00:03:22.685 EXTRA_VAGRANTFILES=
00:03:22.685 NIC_MODEL=e1000
00:03:22.685 
00:03:22.685 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt'
00:03:22.685 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2
00:03:25.970 Bringing machine 'default' up with 'libvirt' provider...
00:03:26.228 ==> default: Creating image (snapshot of base box volume).
00:03:26.486 ==> default: Creating domain with the following settings...
00:03:26.486 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733426599_8c8e43d4daa196a92693
00:03:26.486 ==> default: -- Domain type: kvm
00:03:26.486 ==> default: -- Cpus: 10
00:03:26.486 ==> default: -- Feature: acpi
00:03:26.486 ==> default: -- Feature: apic
00:03:26.486 ==> default: -- Feature: pae
00:03:26.486 ==> default: -- Memory: 12288M
00:03:26.486 ==> default: -- Memory Backing: hugepages: 
00:03:26.486 ==> default: -- Management MAC: 
00:03:26.486 ==> default: -- Loader: 
00:03:26.486 ==> default: -- Nvram: 
00:03:26.486 ==> default: -- Base box: spdk/fedora39
00:03:26.486 ==> default: -- Storage pool: default
00:03:26.486 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733426599_8c8e43d4daa196a92693.img (20G)
00:03:26.486 ==> default: -- Volume Cache: default
00:03:26.486 ==> default: -- Kernel: 
00:03:26.486 ==> default: -- Initrd: 
00:03:26.486 ==> default: -- Graphics Type: vnc
00:03:26.486 ==> default: -- Graphics Port: -1
00:03:26.486 ==> default: -- Graphics IP: 127.0.0.1
00:03:26.487 ==> default: -- Graphics Password: Not defined
00:03:26.487 ==> default: -- Video Type: cirrus
00:03:26.487 ==> default: -- Video VRAM: 9216
00:03:26.487 ==> default: -- Sound Type: 
00:03:26.487 ==> default: -- Keymap: en-us
00:03:26.487 ==> default: -- TPM Path: 
00:03:26.487 ==> default: -- INPUT: type=mouse, bus=ps2
00:03:26.487 ==> default: -- Command line args: 
00:03:26.487 ==> default: -> value=-device, 
00:03:26.487 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 
00:03:26.487 ==> default: -> value=-drive, 
00:03:26.487 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 
00:03:26.487 ==> default: -> value=-device, 
00:03:26.487 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:03:26.487 ==> default: -> value=-device, 
00:03:26.487 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 
00:03:26.487 ==> default: -> value=-drive, 
00:03:26.487 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 
00:03:26.487 ==> default: -> value=-device, 
00:03:26.487 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:03:26.487 ==> default: -> value=-drive, 
00:03:26.487 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 
00:03:26.487 ==> default: -> value=-device, 
00:03:26.487 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:03:26.487 ==> default: -> value=-drive, 
00:03:26.487 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 
00:03:26.487 ==> default: -> value=-device, 
00:03:26.487 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:03:26.487 ==> default: Creating shared folders metadata...
00:03:26.487 ==> default: Starting domain.
00:03:28.387 ==> default: Waiting for domain to get an IP address...
00:03:46.472 ==> default: Waiting for SSH to become available...
00:03:46.472 ==> default: Configuring and enabling network interfaces...
00:03:49.006 default: SSH address: 192.168.121.246:22
00:03:49.007 default: SSH username: vagrant
00:03:49.007 default: SSH auth method: private key
00:03:51.616 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:58.180 ==> default: Mounting SSHFS shared folder...
00:04:00.105 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:04:00.105 ==> default: Checking Mount..
00:04:01.479 ==> default: Folder Successfully Mounted!
00:04:01.479 ==> default: Running provisioner: file...
00:04:02.413 default: ~/.gitconfig => .gitconfig
00:04:02.672 
00:04:02.672 SUCCESS!
00:04:02.672 
00:04:02.672 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:04:02.672 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:04:02.672 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:04:02.672 
00:04:02.681 [Pipeline] }
00:04:02.691 [Pipeline] // stage
00:04:02.698 [Pipeline] dir
00:04:02.698 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt
00:04:02.699 [Pipeline] {
00:04:02.710 [Pipeline] catchError
00:04:02.712 [Pipeline] {
00:04:02.725 [Pipeline] sh
00:04:03.006 + vagrant ssh-config --host vagrant
00:04:03.006 + sed -ne /^Host/,$p
00:04:03.006 + tee ssh_conf
00:04:07.202 Host vagrant
00:04:07.202 HostName 192.168.121.246
00:04:07.202 User vagrant
00:04:07.202 Port 22
00:04:07.202 UserKnownHostsFile /dev/null
00:04:07.202 StrictHostKeyChecking no
00:04:07.202 PasswordAuthentication no
00:04:07.202 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:04:07.202 IdentitiesOnly yes
00:04:07.202 LogLevel FATAL
00:04:07.202 ForwardAgent yes
00:04:07.202 ForwardX11 yes
00:04:07.202 
00:04:07.216 [Pipeline] withEnv
00:04:07.219 [Pipeline] {
00:04:07.234 [Pipeline] sh
00:04:07.516 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:04:07.517 source /etc/os-release
00:04:07.517 [[ -e /image.version ]] && img=$(< /image.version)
00:04:07.517 # Minimal, systemd-like check.
00:04:07.517 if [[ -e /.dockerenv ]]; then
00:04:07.517 # Clear garbage from the node's name:
00:04:07.517 # agt-er_autotest_547-896 -> autotest_547-896
00:04:07.517 # $HOSTNAME is the actual container id
00:04:07.517 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:04:07.517 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:04:07.517 # We can assume this is a mount from a host where container is running,
00:04:07.517 # so fetch its hostname to easily identify the target swarm worker.
00:04:07.517 container="$(< /etc/hostname) ($agent)"
00:04:07.517 else
00:04:07.517 # Fallback
00:04:07.517 container=$agent
00:04:07.517 fi
00:04:07.517 fi
00:04:07.517 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:04:07.517 
00:04:07.789 [Pipeline] }
00:04:07.805 [Pipeline] // withEnv
00:04:07.813 [Pipeline] setCustomBuildProperty
00:04:07.829 [Pipeline] stage
00:04:07.831 [Pipeline] { (Tests)
00:04:07.849 [Pipeline] sh
00:04:08.129 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:04:08.143 [Pipeline] sh
00:04:08.422 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:04:08.692 [Pipeline] timeout
00:04:08.692 Timeout set to expire in 1 hr 0 min
00:04:08.694 [Pipeline] {
00:04:08.707 [Pipeline] sh
00:04:08.984 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:04:09.551 HEAD is now at 2c140f58f nvme/rdma: Support accel sequence
00:04:09.564 [Pipeline] sh
00:04:09.844 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:04:10.118 [Pipeline] sh
00:04:10.403 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:04:10.680 [Pipeline] sh
00:04:10.959 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo
00:04:11.224 ++ readlink -f spdk_repo
00:04:11.224 + DIR_ROOT=/home/vagrant/spdk_repo
00:04:11.224 + [[ -n /home/vagrant/spdk_repo ]]
00:04:11.224 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:04:11.225 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:04:11.225 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:04:11.225 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:04:11.225 + [[ -d /home/vagrant/spdk_repo/output ]]
00:04:11.225 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]]
00:04:11.225 + cd /home/vagrant/spdk_repo
00:04:11.225 + source /etc/os-release
00:04:11.225 ++ NAME='Fedora Linux'
00:04:11.225 ++ VERSION='39 (Cloud Edition)'
00:04:11.225 ++ ID=fedora
00:04:11.225 ++ VERSION_ID=39
00:04:11.225 ++ VERSION_CODENAME=
00:04:11.225 ++ PLATFORM_ID=platform:f39
00:04:11.225 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:11.225 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:11.225 ++ LOGO=fedora-logo-icon
00:04:11.225 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:11.225 ++ HOME_URL=https://fedoraproject.org/
00:04:11.225 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:11.225 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:11.225 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:11.225 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:11.225 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:11.225 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:11.225 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:11.225 ++ SUPPORT_END=2024-11-12
00:04:11.225 ++ VARIANT='Cloud Edition'
00:04:11.225 ++ VARIANT_ID=cloud
00:04:11.225 + uname -a
00:04:11.225 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:11.225 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:11.502 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:11.502 Hugepages
00:04:11.502 node hugesize free / total
00:04:11.502 node0 1048576kB 0 / 0
00:04:11.502 node0 2048kB 0 / 0
00:04:11.502 
00:04:11.502 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:11.760 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:11.760 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:11.760 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:04:11.760 + rm -f /tmp/spdk-ld-path
00:04:11.760 + source autorun-spdk.conf
00:04:11.760 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:11.760 ++ SPDK_TEST_NVMF=1
00:04:11.760 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:11.760 ++ SPDK_TEST_URING=1
00:04:11.760 ++ SPDK_TEST_USDT=1
00:04:11.760 ++ SPDK_RUN_UBSAN=1
00:04:11.760 ++ NET_TYPE=virt
00:04:11.760 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:11.760 ++ RUN_NIGHTLY=0
00:04:11.760 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:11.760 + [[ -n '' ]]
00:04:11.760 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:04:11.760 + for M in /var/spdk/build-*-manifest.txt
00:04:11.760 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:11.760 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:04:11.760 + for M in /var/spdk/build-*-manifest.txt
00:04:11.760 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:11.760 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:04:11.760 + for M in /var/spdk/build-*-manifest.txt
00:04:11.760 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:11.760 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:04:11.760 ++ uname
00:04:11.760 + [[ Linux == \L\i\n\u\x ]]
00:04:11.760 + sudo dmesg -T
00:04:11.760 + sudo dmesg --clear
00:04:11.760 + dmesg_pid=5195
00:04:11.760 + sudo dmesg -Tw
00:04:11.760 + [[ Fedora Linux == FreeBSD ]]
00:04:11.760 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:11.760 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:11.760 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:11.760 + [[ -x /usr/src/fio-static/fio ]] 00:04:11.760 + export FIO_BIN=/usr/src/fio-static/fio 00:04:11.760 + FIO_BIN=/usr/src/fio-static/fio 00:04:11.760 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:11.760 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:11.760 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:11.760 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:11.760 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:11.760 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:11.760 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:11.760 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:11.760 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:12.019 19:24:05 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:04:12.019 19:24:05 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:12.019 19:24:05 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:12.019 19:24:05 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:04:12.019 19:24:05 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:12.019 19:24:05 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:04:12.019 19:24:05 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:04:12.019 19:24:05 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:04:12.019 19:24:05 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:04:12.019 19:24:05 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:12.019 19:24:05 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:04:12.019 19:24:05 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:04:12.019 19:24:05 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:12.019 19:24:05 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:04:12.019 19:24:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:12.019 19:24:05 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:12.019 19:24:05 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:12.019 19:24:05 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:12.019 19:24:05 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:12.019 19:24:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.019 19:24:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.019 19:24:05 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.019 19:24:05 -- paths/export.sh@5 -- $ export PATH 00:04:12.019 19:24:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.019 19:24:05 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:12.019 19:24:05 -- common/autobuild_common.sh@493 -- $ date +%s 00:04:12.019 19:24:05 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733426645.XXXXXX 00:04:12.019 19:24:05 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733426645.Sq02bo 00:04:12.019 19:24:05 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:04:12.019 19:24:05 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:04:12.019 19:24:05 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:12.019 19:24:05 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:12.019 19:24:05 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:12.019 19:24:05 -- common/autobuild_common.sh@509 -- $ get_config_params 00:04:12.019 19:24:05 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:04:12.019 19:24:05 -- common/autotest_common.sh@10 -- $ set +x 00:04:12.019 19:24:05 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:04:12.019 19:24:05 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:04:12.019 19:24:05 -- pm/common@17 -- $ local monitor 00:04:12.019 19:24:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.019 19:24:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.019 19:24:05 -- pm/common@25 -- $ sleep 1 00:04:12.019 19:24:05 -- pm/common@21 -- $ date +%s 00:04:12.019 19:24:05 -- pm/common@21 -- $ date +%s 00:04:12.019 19:24:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733426645 00:04:12.019 19:24:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733426645 00:04:12.019 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733426645_collect-vmstat.pm.log 00:04:12.019 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733426645_collect-cpu-load.pm.log 00:04:12.953 19:24:06 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:04:12.953 19:24:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:12.953 19:24:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:12.953 19:24:06 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:12.953 19:24:06 -- spdk/autobuild.sh@16 -- $ date -u 00:04:12.953 Thu Dec 5 07:24:06 PM UTC 2024 00:04:12.953 19:24:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:12.953 v25.01-pre-297-g2c140f58f 00:04:12.953 19:24:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:12.953 19:24:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:12.953 19:24:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:12.953 19:24:06 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:12.953 19:24:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:12.953 19:24:06 -- common/autotest_common.sh@10 -- $ set +x 00:04:12.953 ************************************ 00:04:12.953 START TEST ubsan 00:04:12.953 ************************************ 00:04:12.953 using ubsan 00:04:12.953 19:24:06 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:04:12.953 00:04:12.953 real 0m0.000s 00:04:12.953 user 0m0.000s 00:04:12.953 sys 0m0.000s 00:04:12.953 19:24:06 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:12.953 19:24:06 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:12.953 ************************************ 00:04:12.953 END TEST ubsan 00:04:12.953 ************************************ 00:04:12.953 19:24:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:12.953 19:24:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:12.953 19:24:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:12.953 19:24:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:12.953 19:24:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:12.953 19:24:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:12.953 19:24:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:12.953 19:24:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:12.953 19:24:06 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:04:13.211 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:13.211 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:13.779 Using 'verbs' RDMA provider 00:04:27.078 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:41.954 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:41.954 Creating mk/config.mk...done. 00:04:41.954 Creating mk/cc.flags.mk...done. 00:04:41.954 Type 'make' to build. 
00:04:41.954 19:24:33 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:41.954 19:24:33 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:41.954 19:24:33 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:41.954 19:24:33 -- common/autotest_common.sh@10 -- $ set +x 00:04:41.954 ************************************ 00:04:41.954 START TEST make 00:04:41.954 ************************************ 00:04:41.954 19:24:33 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:41.954 make[1]: Nothing to be done for 'all'. 00:04:54.167 The Meson build system 00:04:54.167 Version: 1.5.0 00:04:54.167 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:54.167 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:54.167 Build type: native build 00:04:54.167 Program cat found: YES (/usr/bin/cat) 00:04:54.167 Project name: DPDK 00:04:54.167 Project version: 24.03.0 00:04:54.167 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:54.167 C linker for the host machine: cc ld.bfd 2.40-14 00:04:54.167 Host machine cpu family: x86_64 00:04:54.167 Host machine cpu: x86_64 00:04:54.167 Message: ## Building in Developer Mode ## 00:04:54.167 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:54.167 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:54.167 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:54.167 Program python3 found: YES (/usr/bin/python3) 00:04:54.167 Program cat found: YES (/usr/bin/cat) 00:04:54.167 Compiler for C supports arguments -march=native: YES 00:04:54.167 Checking for size of "void *" : 8 00:04:54.167 Checking for size of "void *" : 8 (cached) 00:04:54.167 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:54.167 Library m found: YES 00:04:54.167 Library numa found: YES 00:04:54.167 Has header "numaif.h" : YES 00:04:54.167 Library fdt found: NO 00:04:54.167 Library execinfo found: NO 00:04:54.167 Has header "execinfo.h" : YES 00:04:54.167 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:54.167 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:54.167 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:54.167 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:54.167 Run-time dependency openssl found: YES 3.1.1 00:04:54.167 Run-time dependency libpcap found: YES 1.10.4 00:04:54.167 Has header "pcap.h" with dependency libpcap: YES 00:04:54.167 Compiler for C supports arguments -Wcast-qual: YES 00:04:54.167 Compiler for C supports arguments -Wdeprecated: YES 00:04:54.167 Compiler for C supports arguments -Wformat: YES 00:04:54.167 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:54.167 Compiler for C supports arguments -Wformat-security: NO 00:04:54.167 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:54.167 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:54.167 Compiler for C supports arguments -Wnested-externs: YES 00:04:54.167 Compiler for C supports arguments -Wold-style-definition: YES 00:04:54.167 Compiler for C supports arguments -Wpointer-arith: YES 00:04:54.167 Compiler for C supports arguments -Wsign-compare: YES 00:04:54.167 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:54.167 Compiler for C supports arguments -Wundef: YES 00:04:54.167 Compiler for C supports arguments -Wwrite-strings: YES 00:04:54.167 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:04:54.167 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:54.167 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:54.167 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:54.167 Program objdump found: YES (/usr/bin/objdump) 00:04:54.167 Compiler for C supports arguments -mavx512f: YES 00:04:54.167 Checking if "AVX512 checking" compiles: YES 00:04:54.167 Fetching value of define "__SSE4_2__" : 1 00:04:54.167 Fetching value of define "__AES__" : 1 00:04:54.167 Fetching value of define "__AVX__" : 1 00:04:54.167 Fetching value of define "__AVX2__" : 1 00:04:54.167 Fetching value of define "__AVX512BW__" : (undefined) 00:04:54.167 Fetching value of define "__AVX512CD__" : (undefined) 00:04:54.167 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:54.167 Fetching value of define "__AVX512F__" : (undefined) 00:04:54.167 Fetching value of define "__AVX512VL__" : (undefined) 00:04:54.167 Fetching value of define "__PCLMUL__" : 1 00:04:54.167 Fetching value of define "__RDRND__" : 1 00:04:54.167 Fetching value of define "__RDSEED__" : 1 00:04:54.167 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:54.167 Fetching value of define "__znver1__" : (undefined) 00:04:54.167 Fetching value of define "__znver2__" : (undefined) 00:04:54.167 Fetching value of define "__znver3__" : (undefined) 00:04:54.168 Fetching value of define "__znver4__" : (undefined) 00:04:54.168 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:54.168 Message: lib/log: Defining dependency "log" 00:04:54.168 Message: lib/kvargs: Defining dependency "kvargs" 00:04:54.168 Message: lib/telemetry: Defining dependency "telemetry" 00:04:54.168 Checking for function "getentropy" : NO 00:04:54.168 Message: lib/eal: Defining dependency "eal" 00:04:54.168 Message: lib/ring: Defining dependency "ring" 00:04:54.168 Message: lib/rcu: Defining dependency "rcu" 00:04:54.168 Message: lib/mempool: Defining dependency "mempool" 00:04:54.168 Message: lib/mbuf: Defining dependency "mbuf" 00:04:54.168 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:54.168 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:54.168 Compiler for C supports arguments -mpclmul: YES 00:04:54.168 Compiler for C supports arguments -maes: YES 00:04:54.168 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:54.168 Compiler for C supports arguments -mavx512bw: YES 00:04:54.168 Compiler for C supports arguments -mavx512dq: YES 00:04:54.168 Compiler for C supports arguments -mavx512vl: YES 00:04:54.168 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:54.168 Compiler for C supports arguments -mavx2: YES 00:04:54.168 Compiler for C supports arguments -mavx: YES 00:04:54.168 Message: lib/net: Defining dependency "net" 00:04:54.168 Message: lib/meter: Defining dependency "meter" 00:04:54.168 Message: lib/ethdev: Defining dependency "ethdev" 00:04:54.168 Message: lib/pci: Defining dependency "pci" 00:04:54.168 Message: lib/cmdline: Defining dependency "cmdline" 00:04:54.168 Message: lib/hash: Defining dependency "hash" 00:04:54.168 Message: lib/timer: Defining dependency "timer" 00:04:54.168 Message: lib/compressdev: Defining dependency "compressdev" 00:04:54.168 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:54.168 Message: lib/dmadev: Defining dependency "dmadev" 00:04:54.168 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:54.168 Message: lib/power: Defining 
dependency "power" 00:04:54.168 Message: lib/reorder: Defining dependency "reorder" 00:04:54.168 Message: lib/security: Defining dependency "security" 00:04:54.168 Has header "linux/userfaultfd.h" : YES 00:04:54.168 Has header "linux/vduse.h" : YES 00:04:54.168 Message: lib/vhost: Defining dependency "vhost" 00:04:54.168 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:54.168 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:54.168 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:54.168 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:54.168 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:54.168 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:54.168 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:54.168 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:54.168 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:54.168 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:54.168 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:54.168 Configuring doxy-api-html.conf using configuration 00:04:54.168 Configuring doxy-api-man.conf using configuration 00:04:54.168 Program mandb found: YES (/usr/bin/mandb) 00:04:54.168 Program sphinx-build found: NO 00:04:54.168 Configuring rte_build_config.h using configuration 00:04:54.168 Message: 00:04:54.168 ================= 00:04:54.168 Applications Enabled 00:04:54.168 ================= 00:04:54.168 00:04:54.168 apps: 00:04:54.168 00:04:54.168 00:04:54.168 Message: 00:04:54.168 ================= 00:04:54.168 Libraries Enabled 00:04:54.168 ================= 00:04:54.168 00:04:54.168 libs: 00:04:54.168 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:54.168 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:54.168 cryptodev, dmadev, power, reorder, security, vhost, 00:04:54.168 00:04:54.168 Message: 00:04:54.168 =============== 00:04:54.168 Drivers Enabled 00:04:54.168 =============== 00:04:54.168 00:04:54.168 common: 00:04:54.168 00:04:54.168 bus: 00:04:54.168 pci, vdev, 00:04:54.168 mempool: 00:04:54.168 ring, 00:04:54.168 dma: 00:04:54.168 00:04:54.168 net: 00:04:54.168 00:04:54.168 crypto: 00:04:54.168 00:04:54.168 compress: 00:04:54.168 00:04:54.168 vdpa: 00:04:54.168 00:04:54.168 00:04:54.168 Message: 00:04:54.168 ================= 00:04:54.168 Content Skipped 00:04:54.168 ================= 00:04:54.168 00:04:54.168 apps: 00:04:54.168 dumpcap: explicitly disabled via build config 00:04:54.168 graph: explicitly disabled via build config 00:04:54.168 pdump: explicitly disabled via build config 00:04:54.168 proc-info: explicitly disabled via build config 00:04:54.168 test-acl: explicitly disabled via build config 00:04:54.168 test-bbdev: explicitly disabled via build config 00:04:54.168 test-cmdline: explicitly disabled via build config 00:04:54.168 test-compress-perf: explicitly disabled via build config 00:04:54.168 test-crypto-perf: explicitly disabled via build config 00:04:54.168 test-dma-perf: explicitly disabled via build config 00:04:54.168 test-eventdev: explicitly disabled via build config 00:04:54.168 test-fib: explicitly disabled via build config 00:04:54.168 test-flow-perf: explicitly disabled via build config 00:04:54.168 test-gpudev: explicitly disabled via build config 00:04:54.168 test-mldev: explicitly disabled via build config 00:04:54.168 test-pipeline: 
explicitly disabled via build config 00:04:54.168 test-pmd: explicitly disabled via build config 00:04:54.168 test-regex: explicitly disabled via build config 00:04:54.168 test-sad: explicitly disabled via build config 00:04:54.168 test-security-perf: explicitly disabled via build config 00:04:54.168 00:04:54.168 libs: 00:04:54.168 argparse: explicitly disabled via build config 00:04:54.168 metrics: explicitly disabled via build config 00:04:54.168 acl: explicitly disabled via build config 00:04:54.168 bbdev: explicitly disabled via build config 00:04:54.168 bitratestats: explicitly disabled via build config 00:04:54.168 bpf: explicitly disabled via build config 00:04:54.168 cfgfile: explicitly disabled via build config 00:04:54.168 distributor: explicitly disabled via build config 00:04:54.168 efd: explicitly disabled via build config 00:04:54.168 eventdev: explicitly disabled via build config 00:04:54.168 dispatcher: explicitly disabled via build config 00:04:54.168 gpudev: explicitly disabled via build config 00:04:54.168 gro: explicitly disabled via build config 00:04:54.168 gso: explicitly disabled via build config 00:04:54.168 ip_frag: explicitly disabled via build config 00:04:54.168 jobstats: explicitly disabled via build config 00:04:54.168 latencystats: explicitly disabled via build config 00:04:54.168 lpm: explicitly disabled via build config 00:04:54.168 member: explicitly disabled via build config 00:04:54.168 pcapng: explicitly disabled via build config 00:04:54.168 rawdev: explicitly disabled via build config 00:04:54.168 regexdev: explicitly disabled via build config 00:04:54.168 mldev: explicitly disabled via build config 00:04:54.168 rib: explicitly disabled via build config 00:04:54.168 sched: explicitly disabled via build config 00:04:54.168 stack: explicitly disabled via build config 00:04:54.168 ipsec: explicitly disabled via build config 00:04:54.168 pdcp: explicitly disabled via build config 00:04:54.168 fib: explicitly disabled via build config 00:04:54.168 port: explicitly disabled via build config 00:04:54.168 pdump: explicitly disabled via build config 00:04:54.168 table: explicitly disabled via build config 00:04:54.168 pipeline: explicitly disabled via build config 00:04:54.168 graph: explicitly disabled via build config 00:04:54.168 node: explicitly disabled via build config 00:04:54.168 00:04:54.168 drivers: 00:04:54.168 common/cpt: not in enabled drivers build config 00:04:54.168 common/dpaax: not in enabled drivers build config 00:04:54.168 common/iavf: not in enabled drivers build config 00:04:54.168 common/idpf: not in enabled drivers build config 00:04:54.168 common/ionic: not in enabled drivers build config 00:04:54.168 common/mvep: not in enabled drivers build config 00:04:54.168 common/octeontx: not in enabled drivers build config 00:04:54.168 bus/auxiliary: not in enabled drivers build config 00:04:54.168 bus/cdx: not in enabled drivers build config 00:04:54.168 bus/dpaa: not in enabled drivers build config 00:04:54.168 bus/fslmc: not in enabled drivers build config 00:04:54.168 bus/ifpga: not in enabled drivers build config 00:04:54.168 bus/platform: not in enabled drivers build config 00:04:54.168 bus/uacce: not in enabled drivers build config 00:04:54.168 bus/vmbus: not in enabled drivers build config 00:04:54.168 common/cnxk: not in enabled drivers build config 00:04:54.168 common/mlx5: not in enabled drivers build config 00:04:54.168 common/nfp: not in enabled drivers build config 00:04:54.168 common/nitrox: not in enabled drivers build config 
00:04:54.168 common/qat: not in enabled drivers build config 00:04:54.168 common/sfc_efx: not in enabled drivers build config 00:04:54.168 mempool/bucket: not in enabled drivers build config 00:04:54.168 mempool/cnxk: not in enabled drivers build config 00:04:54.168 mempool/dpaa: not in enabled drivers build config 00:04:54.168 mempool/dpaa2: not in enabled drivers build config 00:04:54.168 mempool/octeontx: not in enabled drivers build config 00:04:54.168 mempool/stack: not in enabled drivers build config 00:04:54.168 dma/cnxk: not in enabled drivers build config 00:04:54.168 dma/dpaa: not in enabled drivers build config 00:04:54.168 dma/dpaa2: not in enabled drivers build config 00:04:54.168 dma/hisilicon: not in enabled drivers build config 00:04:54.168 dma/idxd: not in enabled drivers build config 00:04:54.168 dma/ioat: not in enabled drivers build config 00:04:54.168 dma/skeleton: not in enabled drivers build config 00:04:54.168 net/af_packet: not in enabled drivers build config 00:04:54.168 net/af_xdp: not in enabled drivers build config 00:04:54.168 net/ark: not in enabled drivers build config 00:04:54.168 net/atlantic: not in enabled drivers build config 00:04:54.168 net/avp: not in enabled drivers build config 00:04:54.168 net/axgbe: not in enabled drivers build config 00:04:54.168 net/bnx2x: not in enabled drivers build config 00:04:54.168 net/bnxt: not in enabled drivers build config 00:04:54.169 net/bonding: not in enabled drivers build config 00:04:54.169 net/cnxk: not in enabled drivers build config 00:04:54.169 net/cpfl: not in enabled drivers build config 00:04:54.169 net/cxgbe: not in enabled drivers build config 00:04:54.169 net/dpaa: not in enabled drivers build config 00:04:54.169 net/dpaa2: not in enabled drivers build config 00:04:54.169 net/e1000: not in enabled drivers build config 00:04:54.169 net/ena: not in enabled drivers build config 00:04:54.169 net/enetc: not in enabled drivers build config 00:04:54.169 net/enetfec: not in enabled drivers build config 00:04:54.169 net/enic: not in enabled drivers build config 00:04:54.169 net/failsafe: not in enabled drivers build config 00:04:54.169 net/fm10k: not in enabled drivers build config 00:04:54.169 net/gve: not in enabled drivers build config 00:04:54.169 net/hinic: not in enabled drivers build config 00:04:54.169 net/hns3: not in enabled drivers build config 00:04:54.169 net/i40e: not in enabled drivers build config 00:04:54.169 net/iavf: not in enabled drivers build config 00:04:54.169 net/ice: not in enabled drivers build config 00:04:54.169 net/idpf: not in enabled drivers build config 00:04:54.169 net/igc: not in enabled drivers build config 00:04:54.169 net/ionic: not in enabled drivers build config 00:04:54.169 net/ipn3ke: not in enabled drivers build config 00:04:54.169 net/ixgbe: not in enabled drivers build config 00:04:54.169 net/mana: not in enabled drivers build config 00:04:54.169 net/memif: not in enabled drivers build config 00:04:54.169 net/mlx4: not in enabled drivers build config 00:04:54.169 net/mlx5: not in enabled drivers build config 00:04:54.169 net/mvneta: not in enabled drivers build config 00:04:54.169 net/mvpp2: not in enabled drivers build config 00:04:54.169 net/netvsc: not in enabled drivers build config 00:04:54.169 net/nfb: not in enabled drivers build config 00:04:54.169 net/nfp: not in enabled drivers build config 00:04:54.169 net/ngbe: not in enabled drivers build config 00:04:54.169 net/null: not in enabled drivers build config 00:04:54.169 net/octeontx: not in enabled drivers 
build config 00:04:54.169 net/octeon_ep: not in enabled drivers build config 00:04:54.169 net/pcap: not in enabled drivers build config 00:04:54.169 net/pfe: not in enabled drivers build config 00:04:54.169 net/qede: not in enabled drivers build config 00:04:54.169 net/ring: not in enabled drivers build config 00:04:54.169 net/sfc: not in enabled drivers build config 00:04:54.169 net/softnic: not in enabled drivers build config 00:04:54.169 net/tap: not in enabled drivers build config 00:04:54.169 net/thunderx: not in enabled drivers build config 00:04:54.169 net/txgbe: not in enabled drivers build config 00:04:54.169 net/vdev_netvsc: not in enabled drivers build config 00:04:54.169 net/vhost: not in enabled drivers build config 00:04:54.169 net/virtio: not in enabled drivers build config 00:04:54.169 net/vmxnet3: not in enabled drivers build config 00:04:54.169 raw/*: missing internal dependency, "rawdev" 00:04:54.169 crypto/armv8: not in enabled drivers build config 00:04:54.169 crypto/bcmfs: not in enabled drivers build config 00:04:54.169 crypto/caam_jr: not in enabled drivers build config 00:04:54.169 crypto/ccp: not in enabled drivers build config 00:04:54.169 crypto/cnxk: not in enabled drivers build config 00:04:54.169 crypto/dpaa_sec: not in enabled drivers build config 00:04:54.169 crypto/dpaa2_sec: not in enabled drivers build config 00:04:54.169 crypto/ipsec_mb: not in enabled drivers build config 00:04:54.169 crypto/mlx5: not in enabled drivers build config 00:04:54.169 crypto/mvsam: not in enabled drivers build config 00:04:54.169 crypto/nitrox: not in enabled drivers build config 00:04:54.169 crypto/null: not in enabled drivers build config 00:04:54.169 crypto/octeontx: not in enabled drivers build config 00:04:54.169 crypto/openssl: not in enabled drivers build config 00:04:54.169 crypto/scheduler: not in enabled drivers build config 00:04:54.169 crypto/uadk: not in enabled drivers build config 00:04:54.169 crypto/virtio: not in enabled drivers build config 00:04:54.169 compress/isal: not in enabled drivers build config 00:04:54.169 compress/mlx5: not in enabled drivers build config 00:04:54.169 compress/nitrox: not in enabled drivers build config 00:04:54.169 compress/octeontx: not in enabled drivers build config 00:04:54.169 compress/zlib: not in enabled drivers build config 00:04:54.169 regex/*: missing internal dependency, "regexdev" 00:04:54.169 ml/*: missing internal dependency, "mldev" 00:04:54.169 vdpa/ifc: not in enabled drivers build config 00:04:54.169 vdpa/mlx5: not in enabled drivers build config 00:04:54.169 vdpa/nfp: not in enabled drivers build config 00:04:54.169 vdpa/sfc: not in enabled drivers build config 00:04:54.169 event/*: missing internal dependency, "eventdev" 00:04:54.169 baseband/*: missing internal dependency, "bbdev" 00:04:54.169 gpu/*: missing internal dependency, "gpudev" 00:04:54.169 00:04:54.169 00:04:54.169 Build targets in project: 85 00:04:54.169 00:04:54.169 DPDK 24.03.0 00:04:54.169 00:04:54.169 User defined options 00:04:54.169 buildtype : debug 00:04:54.169 default_library : shared 00:04:54.169 libdir : lib 00:04:54.169 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:54.169 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:54.169 c_link_args : 00:04:54.169 cpu_instruction_set: native 00:04:54.169 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:54.169 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:54.169 enable_docs : false 00:04:54.169 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:54.169 enable_kmods : false 00:04:54.169 max_lcores : 128 00:04:54.169 tests : false 00:04:54.169 00:04:54.169 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:54.169 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:54.169 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:54.169 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:54.169 [3/268] Linking static target lib/librte_kvargs.a 00:04:54.169 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:54.169 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:54.169 [6/268] Linking static target lib/librte_log.a 00:04:54.428 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.428 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:54.428 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:54.687 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:54.687 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:54.687 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:54.687 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:54.687 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:54.946 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.946 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:54.946 [17/268] Linking static target lib/librte_telemetry.a 00:04:54.946 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:54.946 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:54.946 [20/268] Linking target lib/librte_log.so.24.1 00:04:55.205 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:55.205 [22/268] Linking target lib/librte_kvargs.so.24.1 00:04:55.464 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:55.464 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:55.464 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:55.464 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:55.464 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:55.723 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:55.723 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:55.723 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:55.723 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:55.981 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.981 [33/268] Linking target lib/librte_telemetry.so.24.1 00:04:55.981 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:55.981 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:56.241 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:56.241 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:56.241 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:56.241 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:56.502 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:56.502 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:56.502 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:56.502 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:56.761 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:56.761 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:56.761 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:56.761 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:57.021 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:57.021 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:57.021 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:57.021 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:57.280 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:57.280 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:57.539 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:57.539 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:57.539 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:57.799 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:57.799 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:57.799 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:58.058 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:58.058 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:58.058 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:58.318 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:58.318 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:58.318 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:58.576 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:58.576 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:58.835 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:58.835 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:58.835 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:58.835 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:58.835 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:59.094 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:59.094 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:59.094 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:59.094 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:59.094 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:59.094 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:59.094 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:59.352 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:59.612 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:59.612 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:59.612 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:59.612 [84/268] Linking static target lib/librte_ring.a 00:04:59.612 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:59.612 [86/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:59.612 [87/268] Linking static target lib/librte_rcu.a 00:04:59.872 [88/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:59.872 [89/268] Linking static target lib/librte_eal.a 00:05:00.130 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:00.130 [91/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.130 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:00.130 [93/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.130 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:00.130 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:00.130 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:00.390 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:00.390 [98/268] Linking static target lib/librte_mempool.a 00:05:00.390 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:00.649 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:00.649 [101/268] Linking static target lib/librte_mbuf.a 00:05:00.908 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:00.908 [103/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:00.908 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:00.908 [105/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:00.908 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:00.908 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:01.168 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:01.168 [109/268] Linking static target lib/librte_net.a 00:05:01.168 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:01.168 [111/268] Linking static target lib/librte_meter.a 00:05:01.427 [112/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:01.427 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.686 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:01.686 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.686 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:01.686 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.686 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:01.687 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.254 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:02.254 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:02.562 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:02.562 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:02.562 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:02.820 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:02.820 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:02.820 [127/268] Linking static target lib/librte_pci.a 00:05:02.820 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:03.079 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:03.079 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:03.079 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:03.079 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:03.079 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:03.079 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:03.079 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:03.079 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:03.079 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.338 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:03.338 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:03.338 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:03.338 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:03.338 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:03.338 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:03.338 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:03.338 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:03.597 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:03.597 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:03.597 [148/268] Linking static target lib/librte_cmdline.a 00:05:03.856 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:03.856 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:03.856 [151/268] Linking static 
target lib/librte_ethdev.a 00:05:04.116 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:04.116 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:04.116 [154/268] Linking static target lib/librte_timer.a 00:05:04.116 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:04.116 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:04.375 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:04.375 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:04.375 [159/268] Linking static target lib/librte_hash.a 00:05:04.635 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:04.635 [161/268] Linking static target lib/librte_compressdev.a 00:05:04.635 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:04.635 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.894 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:04.894 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:05.153 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:05.153 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:05.153 [168/268] Linking static target lib/librte_dmadev.a 00:05:05.412 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:05.412 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.412 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:05.412 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:05.412 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:05.413 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:05.413 [175/268] Linking static target lib/librte_cryptodev.a 00:05:05.413 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.671 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.930 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:05.930 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:05.930 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:05.930 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:06.189 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:06.189 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:06.189 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:06.449 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:06.449 [186/268] Linking static target lib/librte_power.a 00:05:06.709 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:06.709 [188/268] Linking static target lib/librte_reorder.a 00:05:06.709 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:06.968 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:06.968 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:06.968 [192/268] Linking static target lib/librte_security.a 00:05:06.968 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:07.227 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.227 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:07.485 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.743 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.743 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:07.743 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:08.000 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.000 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:08.000 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:08.566 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:08.566 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:08.566 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:08.824 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:08.824 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:08.824 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:08.824 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:08.824 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:08.824 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:08.824 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:09.109 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:09.109 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:09.109 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:09.109 [216/268] Linking static target drivers/librte_bus_pci.a 00:05:09.109 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:09.109 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:09.109 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:09.109 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:09.109 [221/268] Linking static target drivers/librte_bus_vdev.a 00:05:09.109 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:09.367 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:09.367 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:09.367 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:09.367 [226/268] Linking static target drivers/librte_mempool_ring.a 00:05:09.367 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.625 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:05:09.883 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:09.883 [230/268] Linking static target lib/librte_vhost.a 00:05:10.824 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.824 [232/268] Linking target lib/librte_eal.so.24.1 00:05:11.082 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:11.082 [234/268] Linking target lib/librte_meter.so.24.1 00:05:11.082 [235/268] Linking target lib/librte_dmadev.so.24.1 00:05:11.082 [236/268] Linking target lib/librte_ring.so.24.1 00:05:11.082 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:11.082 [238/268] Linking target lib/librte_pci.so.24.1 00:05:11.082 [239/268] Linking target lib/librte_timer.so.24.1 00:05:11.082 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:11.082 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:11.082 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:11.082 [243/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.082 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:11.082 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:11.082 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:11.340 [247/268] Linking target lib/librte_rcu.so.24.1 00:05:11.340 [248/268] Linking target lib/librte_mempool.so.24.1 00:05:11.340 [249/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.340 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:11.340 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:11.340 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:11.340 [253/268] Linking target lib/librte_mbuf.so.24.1 00:05:11.598 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:11.598 [255/268] Linking target lib/librte_net.so.24.1 00:05:11.598 [256/268] Linking target lib/librte_compressdev.so.24.1 00:05:11.598 [257/268] Linking target lib/librte_reorder.so.24.1 00:05:11.598 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:05:11.856 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:11.856 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:11.856 [261/268] Linking target lib/librte_cmdline.so.24.1 00:05:11.856 [262/268] Linking target lib/librte_hash.so.24.1 00:05:11.856 [263/268] Linking target lib/librte_security.so.24.1 00:05:11.856 [264/268] Linking target lib/librte_ethdev.so.24.1 00:05:11.856 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:11.856 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:12.114 [267/268] Linking target lib/librte_power.so.24.1 00:05:12.114 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:12.114 INFO: autodetecting backend as ninja 00:05:12.114 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:38.667 CC lib/ut/ut.o 00:05:38.667 CC lib/ut_mock/mock.o 00:05:38.667 CC lib/log/log.o 00:05:38.667 CC lib/log/log_flags.o 
00:05:38.667 CC lib/log/log_deprecated.o 00:05:38.667 LIB libspdk_ut.a 00:05:38.667 LIB libspdk_ut_mock.a 00:05:38.667 LIB libspdk_log.a 00:05:38.667 SO libspdk_ut.so.2.0 00:05:38.667 SO libspdk_ut_mock.so.6.0 00:05:38.667 SO libspdk_log.so.7.1 00:05:38.667 SYMLINK libspdk_ut.so 00:05:38.667 SYMLINK libspdk_ut_mock.so 00:05:38.667 SYMLINK libspdk_log.so 00:05:38.667 CXX lib/trace_parser/trace.o 00:05:38.667 CC lib/util/base64.o 00:05:38.667 CC lib/util/bit_array.o 00:05:38.667 CC lib/util/cpuset.o 00:05:38.667 CC lib/util/crc16.o 00:05:38.667 CC lib/util/crc32.o 00:05:38.667 CC lib/util/crc32c.o 00:05:38.667 CC lib/ioat/ioat.o 00:05:38.667 CC lib/dma/dma.o 00:05:38.667 CC lib/vfio_user/host/vfio_user_pci.o 00:05:38.667 CC lib/util/crc32_ieee.o 00:05:38.667 CC lib/vfio_user/host/vfio_user.o 00:05:38.667 CC lib/util/crc64.o 00:05:38.667 CC lib/util/dif.o 00:05:38.667 CC lib/util/fd.o 00:05:38.667 CC lib/util/fd_group.o 00:05:38.667 LIB libspdk_dma.a 00:05:38.667 SO libspdk_dma.so.5.0 00:05:38.667 CC lib/util/file.o 00:05:38.667 CC lib/util/hexlify.o 00:05:38.667 LIB libspdk_ioat.a 00:05:38.667 CC lib/util/iov.o 00:05:38.667 SO libspdk_ioat.so.7.0 00:05:38.667 SYMLINK libspdk_dma.so 00:05:38.667 CC lib/util/math.o 00:05:38.667 CC lib/util/net.o 00:05:38.667 SYMLINK libspdk_ioat.so 00:05:38.667 CC lib/util/pipe.o 00:05:38.667 LIB libspdk_vfio_user.a 00:05:38.667 CC lib/util/strerror_tls.o 00:05:38.667 SO libspdk_vfio_user.so.5.0 00:05:38.667 CC lib/util/string.o 00:05:38.667 SYMLINK libspdk_vfio_user.so 00:05:38.667 CC lib/util/uuid.o 00:05:38.667 CC lib/util/xor.o 00:05:38.667 CC lib/util/zipf.o 00:05:38.667 CC lib/util/md5.o 00:05:38.667 LIB libspdk_util.a 00:05:38.667 SO libspdk_util.so.10.1 00:05:38.667 LIB libspdk_trace_parser.a 00:05:38.667 SO libspdk_trace_parser.so.6.0 00:05:38.667 SYMLINK libspdk_util.so 00:05:38.667 SYMLINK libspdk_trace_parser.so 00:05:38.667 CC lib/idxd/idxd.o 00:05:38.667 CC lib/json/json_parse.o 00:05:38.667 CC lib/json/json_util.o 00:05:38.667 CC lib/rdma_utils/rdma_utils.o 00:05:38.667 CC lib/idxd/idxd_kernel.o 00:05:38.667 CC lib/idxd/idxd_user.o 00:05:38.667 CC lib/json/json_write.o 00:05:38.667 CC lib/conf/conf.o 00:05:38.667 CC lib/env_dpdk/env.o 00:05:38.667 CC lib/vmd/vmd.o 00:05:38.667 CC lib/vmd/led.o 00:05:38.667 LIB libspdk_conf.a 00:05:38.667 CC lib/env_dpdk/memory.o 00:05:38.667 CC lib/env_dpdk/pci.o 00:05:38.667 CC lib/env_dpdk/init.o 00:05:38.667 SO libspdk_conf.so.6.0 00:05:38.667 LIB libspdk_rdma_utils.a 00:05:38.667 LIB libspdk_json.a 00:05:38.667 SO libspdk_rdma_utils.so.1.0 00:05:38.667 SYMLINK libspdk_conf.so 00:05:38.667 CC lib/env_dpdk/threads.o 00:05:38.667 SO libspdk_json.so.6.0 00:05:38.667 CC lib/env_dpdk/pci_ioat.o 00:05:38.925 SYMLINK libspdk_rdma_utils.so 00:05:38.926 CC lib/env_dpdk/pci_virtio.o 00:05:38.926 SYMLINK libspdk_json.so 00:05:38.926 CC lib/env_dpdk/pci_vmd.o 00:05:38.926 CC lib/env_dpdk/pci_idxd.o 00:05:38.926 CC lib/rdma_provider/common.o 00:05:38.926 LIB libspdk_idxd.a 00:05:39.184 CC lib/env_dpdk/pci_event.o 00:05:39.184 SO libspdk_idxd.so.12.1 00:05:39.184 CC lib/env_dpdk/sigbus_handler.o 00:05:39.184 LIB libspdk_vmd.a 00:05:39.184 CC lib/env_dpdk/pci_dpdk.o 00:05:39.184 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:39.184 SYMLINK libspdk_idxd.so 00:05:39.184 CC lib/jsonrpc/jsonrpc_server.o 00:05:39.184 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:39.184 SO libspdk_vmd.so.6.0 00:05:39.184 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:39.184 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:39.184 SYMLINK libspdk_vmd.so 00:05:39.184 CC 
lib/jsonrpc/jsonrpc_client.o 00:05:39.184 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:39.442 LIB libspdk_rdma_provider.a 00:05:39.442 SO libspdk_rdma_provider.so.7.0 00:05:39.442 LIB libspdk_jsonrpc.a 00:05:39.442 SYMLINK libspdk_rdma_provider.so 00:05:39.700 SO libspdk_jsonrpc.so.6.0 00:05:39.700 SYMLINK libspdk_jsonrpc.so 00:05:39.959 CC lib/rpc/rpc.o 00:05:39.959 LIB libspdk_env_dpdk.a 00:05:39.959 SO libspdk_env_dpdk.so.15.1 00:05:40.216 LIB libspdk_rpc.a 00:05:40.216 SO libspdk_rpc.so.6.0 00:05:40.216 SYMLINK libspdk_env_dpdk.so 00:05:40.216 SYMLINK libspdk_rpc.so 00:05:40.473 CC lib/notify/notify.o 00:05:40.473 CC lib/notify/notify_rpc.o 00:05:40.473 CC lib/trace/trace.o 00:05:40.473 CC lib/trace/trace_rpc.o 00:05:40.473 CC lib/trace/trace_flags.o 00:05:40.473 CC lib/keyring/keyring.o 00:05:40.473 CC lib/keyring/keyring_rpc.o 00:05:40.731 LIB libspdk_notify.a 00:05:40.731 SO libspdk_notify.so.6.0 00:05:40.731 LIB libspdk_trace.a 00:05:40.731 LIB libspdk_keyring.a 00:05:40.731 SYMLINK libspdk_notify.so 00:05:40.731 SO libspdk_trace.so.11.0 00:05:40.731 SO libspdk_keyring.so.2.0 00:05:40.731 SYMLINK libspdk_trace.so 00:05:40.731 SYMLINK libspdk_keyring.so 00:05:40.989 CC lib/thread/thread.o 00:05:40.989 CC lib/sock/sock.o 00:05:40.989 CC lib/thread/iobuf.o 00:05:40.989 CC lib/sock/sock_rpc.o 00:05:41.555 LIB libspdk_sock.a 00:05:41.555 SO libspdk_sock.so.10.0 00:05:41.908 SYMLINK libspdk_sock.so 00:05:41.908 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:41.908 CC lib/nvme/nvme_ctrlr.o 00:05:41.908 CC lib/nvme/nvme_pcie_common.o 00:05:41.908 CC lib/nvme/nvme_ns.o 00:05:41.908 CC lib/nvme/nvme_fabric.o 00:05:41.908 CC lib/nvme/nvme_ns_cmd.o 00:05:41.908 CC lib/nvme/nvme_qpair.o 00:05:41.908 CC lib/nvme/nvme_pcie.o 00:05:41.908 CC lib/nvme/nvme.o 00:05:42.872 LIB libspdk_thread.a 00:05:42.872 SO libspdk_thread.so.11.0 00:05:42.872 CC lib/nvme/nvme_quirks.o 00:05:42.872 CC lib/nvme/nvme_transport.o 00:05:42.872 SYMLINK libspdk_thread.so 00:05:42.872 CC lib/nvme/nvme_discovery.o 00:05:42.872 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:43.130 CC lib/accel/accel.o 00:05:43.130 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:43.130 CC lib/nvme/nvme_tcp.o 00:05:43.130 CC lib/nvme/nvme_opal.o 00:05:43.130 CC lib/nvme/nvme_io_msg.o 00:05:43.698 CC lib/blob/blobstore.o 00:05:43.698 CC lib/nvme/nvme_poll_group.o 00:05:43.698 CC lib/nvme/nvme_zns.o 00:05:43.698 CC lib/nvme/nvme_stubs.o 00:05:43.698 CC lib/nvme/nvme_auth.o 00:05:43.956 CC lib/init/json_config.o 00:05:43.956 CC lib/init/subsystem.o 00:05:44.216 CC lib/init/subsystem_rpc.o 00:05:44.216 CC lib/accel/accel_rpc.o 00:05:44.216 CC lib/virtio/virtio.o 00:05:44.216 CC lib/init/rpc.o 00:05:44.216 CC lib/virtio/virtio_vhost_user.o 00:05:44.216 CC lib/virtio/virtio_vfio_user.o 00:05:44.475 CC lib/accel/accel_sw.o 00:05:44.475 CC lib/virtio/virtio_pci.o 00:05:44.475 LIB libspdk_init.a 00:05:44.475 SO libspdk_init.so.6.0 00:05:44.475 CC lib/fsdev/fsdev.o 00:05:44.475 CC lib/fsdev/fsdev_io.o 00:05:44.475 CC lib/fsdev/fsdev_rpc.o 00:05:44.475 SYMLINK libspdk_init.so 00:05:44.475 CC lib/blob/request.o 00:05:44.733 LIB libspdk_virtio.a 00:05:44.733 CC lib/nvme/nvme_cuse.o 00:05:44.733 LIB libspdk_accel.a 00:05:44.733 CC lib/nvme/nvme_rdma.o 00:05:44.733 SO libspdk_virtio.so.7.0 00:05:44.733 SO libspdk_accel.so.16.0 00:05:44.733 CC lib/event/app.o 00:05:44.733 SYMLINK libspdk_virtio.so 00:05:44.733 CC lib/event/reactor.o 00:05:44.733 CC lib/event/log_rpc.o 00:05:44.733 SYMLINK libspdk_accel.so 00:05:44.733 CC lib/event/app_rpc.o 00:05:44.992 CC lib/blob/zeroes.o 00:05:44.992 CC 
lib/event/scheduler_static.o 00:05:44.992 CC lib/bdev/bdev.o 00:05:44.992 CC lib/bdev/bdev_rpc.o 00:05:45.251 CC lib/blob/blob_bs_dev.o 00:05:45.251 CC lib/bdev/bdev_zone.o 00:05:45.251 LIB libspdk_fsdev.a 00:05:45.251 CC lib/bdev/part.o 00:05:45.251 SO libspdk_fsdev.so.2.0 00:05:45.251 LIB libspdk_event.a 00:05:45.251 SO libspdk_event.so.14.0 00:05:45.251 SYMLINK libspdk_fsdev.so 00:05:45.251 CC lib/bdev/scsi_nvme.o 00:05:45.509 SYMLINK libspdk_event.so 00:05:45.509 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:46.444 LIB libspdk_nvme.a 00:05:46.444 LIB libspdk_fuse_dispatcher.a 00:05:46.444 SO libspdk_fuse_dispatcher.so.1.0 00:05:46.444 SYMLINK libspdk_fuse_dispatcher.so 00:05:46.444 SO libspdk_nvme.so.15.0 00:05:46.702 SYMLINK libspdk_nvme.so 00:05:46.702 LIB libspdk_blob.a 00:05:46.702 SO libspdk_blob.so.12.0 00:05:46.973 SYMLINK libspdk_blob.so 00:05:47.260 CC lib/lvol/lvol.o 00:05:47.260 CC lib/blobfs/tree.o 00:05:47.260 CC lib/blobfs/blobfs.o 00:05:48.195 LIB libspdk_bdev.a 00:05:48.195 LIB libspdk_blobfs.a 00:05:48.195 SO libspdk_bdev.so.17.0 00:05:48.195 LIB libspdk_lvol.a 00:05:48.195 SO libspdk_blobfs.so.11.0 00:05:48.195 SO libspdk_lvol.so.11.0 00:05:48.195 SYMLINK libspdk_bdev.so 00:05:48.195 SYMLINK libspdk_blobfs.so 00:05:48.195 SYMLINK libspdk_lvol.so 00:05:48.453 CC lib/ublk/ublk.o 00:05:48.453 CC lib/ublk/ublk_rpc.o 00:05:48.453 CC lib/nbd/nbd.o 00:05:48.453 CC lib/nbd/nbd_rpc.o 00:05:48.453 CC lib/ftl/ftl_core.o 00:05:48.453 CC lib/ftl/ftl_init.o 00:05:48.453 CC lib/ftl/ftl_layout.o 00:05:48.453 CC lib/nvmf/ctrlr.o 00:05:48.453 CC lib/ftl/ftl_debug.o 00:05:48.453 CC lib/scsi/dev.o 00:05:48.453 CC lib/ftl/ftl_io.o 00:05:48.453 CC lib/nvmf/ctrlr_discovery.o 00:05:48.711 CC lib/nvmf/ctrlr_bdev.o 00:05:48.711 CC lib/ftl/ftl_sb.o 00:05:48.711 CC lib/scsi/lun.o 00:05:48.711 CC lib/ftl/ftl_l2p.o 00:05:48.711 CC lib/scsi/port.o 00:05:48.711 CC lib/ftl/ftl_l2p_flat.o 00:05:48.711 LIB libspdk_nbd.a 00:05:48.970 CC lib/scsi/scsi.o 00:05:48.970 SO libspdk_nbd.so.7.0 00:05:48.970 SYMLINK libspdk_nbd.so 00:05:48.970 CC lib/scsi/scsi_bdev.o 00:05:48.970 CC lib/scsi/scsi_pr.o 00:05:48.970 CC lib/scsi/scsi_rpc.o 00:05:48.970 LIB libspdk_ublk.a 00:05:48.970 CC lib/ftl/ftl_nv_cache.o 00:05:48.970 CC lib/scsi/task.o 00:05:48.970 SO libspdk_ublk.so.3.0 00:05:48.970 CC lib/nvmf/subsystem.o 00:05:48.970 SYMLINK libspdk_ublk.so 00:05:48.970 CC lib/nvmf/nvmf.o 00:05:48.970 CC lib/nvmf/nvmf_rpc.o 00:05:49.229 CC lib/nvmf/transport.o 00:05:49.229 CC lib/nvmf/tcp.o 00:05:49.229 CC lib/ftl/ftl_band.o 00:05:49.229 CC lib/ftl/ftl_band_ops.o 00:05:49.487 LIB libspdk_scsi.a 00:05:49.487 SO libspdk_scsi.so.9.0 00:05:49.487 SYMLINK libspdk_scsi.so 00:05:49.487 CC lib/ftl/ftl_writer.o 00:05:49.746 CC lib/ftl/ftl_rq.o 00:05:49.746 CC lib/nvmf/stubs.o 00:05:50.004 CC lib/nvmf/mdns_server.o 00:05:50.004 CC lib/nvmf/rdma.o 00:05:50.004 CC lib/ftl/ftl_reloc.o 00:05:50.004 CC lib/nvmf/auth.o 00:05:50.004 CC lib/vhost/vhost.o 00:05:50.004 CC lib/iscsi/conn.o 00:05:50.004 CC lib/iscsi/init_grp.o 00:05:50.004 CC lib/ftl/ftl_l2p_cache.o 00:05:50.262 CC lib/ftl/ftl_p2l.o 00:05:50.262 CC lib/vhost/vhost_rpc.o 00:05:50.262 CC lib/vhost/vhost_scsi.o 00:05:50.520 CC lib/iscsi/iscsi.o 00:05:50.778 CC lib/vhost/vhost_blk.o 00:05:50.778 CC lib/ftl/ftl_p2l_log.o 00:05:50.778 CC lib/iscsi/param.o 00:05:50.778 CC lib/iscsi/portal_grp.o 00:05:50.778 CC lib/vhost/rte_vhost_user.o 00:05:51.036 CC lib/ftl/mngt/ftl_mngt.o 00:05:51.036 CC lib/iscsi/tgt_node.o 00:05:51.036 CC lib/iscsi/iscsi_subsystem.o 00:05:51.036 CC 
lib/iscsi/iscsi_rpc.o 00:05:51.036 CC lib/iscsi/task.o 00:05:51.294 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:51.294 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:51.294 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:51.553 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:51.553 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:51.553 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:51.553 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:51.553 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:51.553 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:51.812 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:51.812 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:51.812 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:51.812 CC lib/ftl/utils/ftl_conf.o 00:05:51.812 CC lib/ftl/utils/ftl_md.o 00:05:51.812 LIB libspdk_iscsi.a 00:05:51.812 CC lib/ftl/utils/ftl_mempool.o 00:05:51.812 CC lib/ftl/utils/ftl_bitmap.o 00:05:51.812 SO libspdk_iscsi.so.8.0 00:05:51.812 LIB libspdk_vhost.a 00:05:52.071 CC lib/ftl/utils/ftl_property.o 00:05:52.071 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:52.071 LIB libspdk_nvmf.a 00:05:52.071 SO libspdk_vhost.so.8.0 00:05:52.071 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:52.071 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:52.071 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:52.071 SYMLINK libspdk_iscsi.so 00:05:52.071 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:52.071 SYMLINK libspdk_vhost.so 00:05:52.071 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:52.071 SO libspdk_nvmf.so.20.0 00:05:52.329 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:52.329 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:52.329 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:52.329 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:52.329 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:52.329 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:52.329 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:52.329 CC lib/ftl/base/ftl_base_dev.o 00:05:52.329 SYMLINK libspdk_nvmf.so 00:05:52.329 CC lib/ftl/base/ftl_base_bdev.o 00:05:52.329 CC lib/ftl/ftl_trace.o 00:05:52.587 LIB libspdk_ftl.a 00:05:52.847 SO libspdk_ftl.so.9.0 00:05:53.105 SYMLINK libspdk_ftl.so 00:05:53.679 CC module/env_dpdk/env_dpdk_rpc.o 00:05:53.679 CC module/fsdev/aio/fsdev_aio.o 00:05:53.679 CC module/accel/error/accel_error.o 00:05:53.679 CC module/sock/posix/posix.o 00:05:53.679 CC module/sock/uring/uring.o 00:05:53.679 CC module/keyring/file/keyring.o 00:05:53.679 CC module/blob/bdev/blob_bdev.o 00:05:53.679 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:53.679 CC module/accel/dsa/accel_dsa.o 00:05:53.679 CC module/accel/ioat/accel_ioat.o 00:05:53.679 LIB libspdk_env_dpdk_rpc.a 00:05:53.679 SO libspdk_env_dpdk_rpc.so.6.0 00:05:53.679 CC module/keyring/file/keyring_rpc.o 00:05:53.679 SYMLINK libspdk_env_dpdk_rpc.so 00:05:53.679 CC module/accel/dsa/accel_dsa_rpc.o 00:05:53.937 CC module/accel/ioat/accel_ioat_rpc.o 00:05:53.937 CC module/accel/error/accel_error_rpc.o 00:05:53.937 LIB libspdk_scheduler_dynamic.a 00:05:53.937 SO libspdk_scheduler_dynamic.so.4.0 00:05:53.937 LIB libspdk_blob_bdev.a 00:05:53.937 SO libspdk_blob_bdev.so.12.0 00:05:53.937 LIB libspdk_keyring_file.a 00:05:53.937 LIB libspdk_accel_dsa.a 00:05:53.937 SYMLINK libspdk_scheduler_dynamic.so 00:05:53.937 SO libspdk_keyring_file.so.2.0 00:05:53.937 LIB libspdk_accel_error.a 00:05:53.937 LIB libspdk_accel_ioat.a 00:05:53.937 SO libspdk_accel_dsa.so.5.0 00:05:53.937 SYMLINK libspdk_blob_bdev.so 00:05:53.937 SO libspdk_accel_ioat.so.6.0 00:05:53.937 SO libspdk_accel_error.so.2.0 00:05:53.937 SYMLINK libspdk_keyring_file.so 00:05:54.195 SYMLINK libspdk_accel_dsa.so 00:05:54.195 SYMLINK libspdk_accel_ioat.so 00:05:54.195 SYMLINK 
libspdk_accel_error.so 00:05:54.195 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:54.195 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:54.195 CC module/scheduler/gscheduler/gscheduler.o 00:05:54.195 CC module/accel/iaa/accel_iaa.o 00:05:54.195 CC module/keyring/linux/keyring.o 00:05:54.195 LIB libspdk_scheduler_dpdk_governor.a 00:05:54.454 LIB libspdk_scheduler_gscheduler.a 00:05:54.454 CC module/fsdev/aio/linux_aio_mgr.o 00:05:54.454 LIB libspdk_sock_uring.a 00:05:54.454 LIB libspdk_sock_posix.a 00:05:54.454 SO libspdk_scheduler_gscheduler.so.4.0 00:05:54.454 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:54.454 SO libspdk_sock_uring.so.5.0 00:05:54.454 SO libspdk_sock_posix.so.6.0 00:05:54.454 CC module/blobfs/bdev/blobfs_bdev.o 00:05:54.454 CC module/bdev/delay/vbdev_delay.o 00:05:54.454 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:54.454 SYMLINK libspdk_scheduler_gscheduler.so 00:05:54.454 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:54.454 CC module/keyring/linux/keyring_rpc.o 00:05:54.454 SYMLINK libspdk_sock_uring.so 00:05:54.454 CC module/bdev/error/vbdev_error.o 00:05:54.454 SYMLINK libspdk_sock_posix.so 00:05:54.454 CC module/accel/iaa/accel_iaa_rpc.o 00:05:54.454 LIB libspdk_fsdev_aio.a 00:05:54.454 LIB libspdk_keyring_linux.a 00:05:54.713 SO libspdk_fsdev_aio.so.1.0 00:05:54.713 SO libspdk_keyring_linux.so.1.0 00:05:54.713 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:54.713 CC module/bdev/gpt/gpt.o 00:05:54.713 CC module/bdev/malloc/bdev_malloc.o 00:05:54.713 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:54.713 CC module/bdev/lvol/vbdev_lvol.o 00:05:54.713 LIB libspdk_accel_iaa.a 00:05:54.713 SYMLINK libspdk_keyring_linux.so 00:05:54.713 CC module/bdev/gpt/vbdev_gpt.o 00:05:54.713 SO libspdk_accel_iaa.so.3.0 00:05:54.713 SYMLINK libspdk_fsdev_aio.so 00:05:54.713 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:54.713 CC module/bdev/error/vbdev_error_rpc.o 00:05:54.713 SYMLINK libspdk_accel_iaa.so 00:05:54.713 LIB libspdk_bdev_delay.a 00:05:54.713 LIB libspdk_blobfs_bdev.a 00:05:54.971 SO libspdk_bdev_delay.so.6.0 00:05:54.972 SO libspdk_blobfs_bdev.so.6.0 00:05:54.972 LIB libspdk_bdev_error.a 00:05:54.972 CC module/bdev/null/bdev_null.o 00:05:54.972 SYMLINK libspdk_bdev_delay.so 00:05:54.972 CC module/bdev/null/bdev_null_rpc.o 00:05:54.972 SYMLINK libspdk_blobfs_bdev.so 00:05:54.972 SO libspdk_bdev_error.so.6.0 00:05:54.972 LIB libspdk_bdev_gpt.a 00:05:54.972 SO libspdk_bdev_gpt.so.6.0 00:05:54.972 CC module/bdev/nvme/bdev_nvme.o 00:05:54.972 CC module/bdev/passthru/vbdev_passthru.o 00:05:54.972 SYMLINK libspdk_bdev_error.so 00:05:54.972 LIB libspdk_bdev_malloc.a 00:05:54.972 SYMLINK libspdk_bdev_gpt.so 00:05:54.972 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:54.972 SO libspdk_bdev_malloc.so.6.0 00:05:55.231 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:55.231 CC module/bdev/nvme/nvme_rpc.o 00:05:55.231 CC module/bdev/raid/bdev_raid.o 00:05:55.231 SYMLINK libspdk_bdev_malloc.so 00:05:55.231 LIB libspdk_bdev_lvol.a 00:05:55.231 CC module/bdev/split/vbdev_split.o 00:05:55.231 LIB libspdk_bdev_null.a 00:05:55.231 SO libspdk_bdev_lvol.so.6.0 00:05:55.231 SO libspdk_bdev_null.so.6.0 00:05:55.231 CC module/bdev/nvme/bdev_mdns_client.o 00:05:55.231 SYMLINK libspdk_bdev_lvol.so 00:05:55.231 SYMLINK libspdk_bdev_null.so 00:05:55.231 LIB libspdk_bdev_passthru.a 00:05:55.231 CC module/bdev/raid/bdev_raid_rpc.o 00:05:55.231 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:55.231 SO libspdk_bdev_passthru.so.6.0 00:05:55.489 CC module/bdev/nvme/vbdev_opal.o 00:05:55.489 SYMLINK 
libspdk_bdev_passthru.so 00:05:55.489 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:55.489 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:55.489 CC module/bdev/uring/bdev_uring.o 00:05:55.489 CC module/bdev/split/vbdev_split_rpc.o 00:05:55.489 CC module/bdev/raid/bdev_raid_sb.o 00:05:55.748 CC module/bdev/raid/raid0.o 00:05:55.748 CC module/bdev/raid/raid1.o 00:05:55.748 LIB libspdk_bdev_split.a 00:05:55.748 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:55.748 CC module/bdev/uring/bdev_uring_rpc.o 00:05:55.748 SO libspdk_bdev_split.so.6.0 00:05:55.748 CC module/bdev/raid/concat.o 00:05:55.748 SYMLINK libspdk_bdev_split.so 00:05:55.748 LIB libspdk_bdev_zone_block.a 00:05:55.748 LIB libspdk_bdev_uring.a 00:05:56.008 SO libspdk_bdev_zone_block.so.6.0 00:05:56.008 SO libspdk_bdev_uring.so.6.0 00:05:56.008 CC module/bdev/aio/bdev_aio.o 00:05:56.008 CC module/bdev/aio/bdev_aio_rpc.o 00:05:56.008 SYMLINK libspdk_bdev_uring.so 00:05:56.008 SYMLINK libspdk_bdev_zone_block.so 00:05:56.008 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:56.008 CC module/bdev/ftl/bdev_ftl.o 00:05:56.008 CC module/bdev/iscsi/bdev_iscsi.o 00:05:56.008 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:56.008 LIB libspdk_bdev_raid.a 00:05:56.266 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:56.266 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:56.266 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:56.266 SO libspdk_bdev_raid.so.6.0 00:05:56.266 LIB libspdk_bdev_aio.a 00:05:56.266 SYMLINK libspdk_bdev_raid.so 00:05:56.266 LIB libspdk_bdev_ftl.a 00:05:56.266 SO libspdk_bdev_aio.so.6.0 00:05:56.266 SO libspdk_bdev_ftl.so.6.0 00:05:56.266 SYMLINK libspdk_bdev_aio.so 00:05:56.525 SYMLINK libspdk_bdev_ftl.so 00:05:56.525 LIB libspdk_bdev_iscsi.a 00:05:56.525 SO libspdk_bdev_iscsi.so.6.0 00:05:56.525 SYMLINK libspdk_bdev_iscsi.so 00:05:56.783 LIB libspdk_bdev_virtio.a 00:05:56.783 SO libspdk_bdev_virtio.so.6.0 00:05:56.784 SYMLINK libspdk_bdev_virtio.so 00:05:57.720 LIB libspdk_bdev_nvme.a 00:05:57.720 SO libspdk_bdev_nvme.so.7.1 00:05:57.720 SYMLINK libspdk_bdev_nvme.so 00:05:58.287 CC module/event/subsystems/scheduler/scheduler.o 00:05:58.287 CC module/event/subsystems/sock/sock.o 00:05:58.287 CC module/event/subsystems/iobuf/iobuf.o 00:05:58.287 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:58.287 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:58.287 CC module/event/subsystems/fsdev/fsdev.o 00:05:58.287 CC module/event/subsystems/keyring/keyring.o 00:05:58.287 CC module/event/subsystems/vmd/vmd.o 00:05:58.287 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:58.287 LIB libspdk_event_vhost_blk.a 00:05:58.287 LIB libspdk_event_scheduler.a 00:05:58.287 LIB libspdk_event_keyring.a 00:05:58.287 SO libspdk_event_vhost_blk.so.3.0 00:05:58.287 LIB libspdk_event_fsdev.a 00:05:58.287 LIB libspdk_event_sock.a 00:05:58.287 LIB libspdk_event_vmd.a 00:05:58.287 LIB libspdk_event_iobuf.a 00:05:58.287 SO libspdk_event_keyring.so.1.0 00:05:58.287 SO libspdk_event_scheduler.so.4.0 00:05:58.287 SO libspdk_event_fsdev.so.1.0 00:05:58.287 SO libspdk_event_sock.so.5.0 00:05:58.545 SO libspdk_event_vmd.so.6.0 00:05:58.545 SO libspdk_event_iobuf.so.3.0 00:05:58.545 SYMLINK libspdk_event_vhost_blk.so 00:05:58.545 SYMLINK libspdk_event_keyring.so 00:05:58.545 SYMLINK libspdk_event_scheduler.so 00:05:58.545 SYMLINK libspdk_event_fsdev.so 00:05:58.545 SYMLINK libspdk_event_sock.so 00:05:58.545 SYMLINK libspdk_event_vmd.so 00:05:58.545 SYMLINK libspdk_event_iobuf.so 00:05:58.803 CC module/event/subsystems/accel/accel.o 00:05:59.062 LIB 
libspdk_event_accel.a 00:05:59.062 SO libspdk_event_accel.so.6.0 00:05:59.062 SYMLINK libspdk_event_accel.so 00:05:59.320 CC module/event/subsystems/bdev/bdev.o 00:05:59.578 LIB libspdk_event_bdev.a 00:05:59.578 SO libspdk_event_bdev.so.6.0 00:05:59.578 SYMLINK libspdk_event_bdev.so 00:05:59.836 CC module/event/subsystems/scsi/scsi.o 00:05:59.836 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:59.836 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:59.836 CC module/event/subsystems/nbd/nbd.o 00:05:59.836 CC module/event/subsystems/ublk/ublk.o 00:06:00.122 LIB libspdk_event_nbd.a 00:06:00.122 LIB libspdk_event_ublk.a 00:06:00.122 LIB libspdk_event_scsi.a 00:06:00.122 SO libspdk_event_nbd.so.6.0 00:06:00.122 SO libspdk_event_ublk.so.3.0 00:06:00.122 SO libspdk_event_scsi.so.6.0 00:06:00.122 SYMLINK libspdk_event_nbd.so 00:06:00.122 SYMLINK libspdk_event_ublk.so 00:06:00.122 LIB libspdk_event_nvmf.a 00:06:00.122 SYMLINK libspdk_event_scsi.so 00:06:00.123 SO libspdk_event_nvmf.so.6.0 00:06:00.123 SYMLINK libspdk_event_nvmf.so 00:06:00.381 CC module/event/subsystems/iscsi/iscsi.o 00:06:00.381 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:00.639 LIB libspdk_event_vhost_scsi.a 00:06:00.639 LIB libspdk_event_iscsi.a 00:06:00.639 SO libspdk_event_vhost_scsi.so.3.0 00:06:00.639 SO libspdk_event_iscsi.so.6.0 00:06:00.639 SYMLINK libspdk_event_vhost_scsi.so 00:06:00.639 SYMLINK libspdk_event_iscsi.so 00:06:00.897 SO libspdk.so.6.0 00:06:00.897 SYMLINK libspdk.so 00:06:01.156 TEST_HEADER include/spdk/accel.h 00:06:01.156 TEST_HEADER include/spdk/accel_module.h 00:06:01.156 CC app/trace_record/trace_record.o 00:06:01.156 TEST_HEADER include/spdk/assert.h 00:06:01.156 CXX app/trace/trace.o 00:06:01.156 TEST_HEADER include/spdk/barrier.h 00:06:01.156 TEST_HEADER include/spdk/base64.h 00:06:01.156 TEST_HEADER include/spdk/bdev.h 00:06:01.156 TEST_HEADER include/spdk/bdev_module.h 00:06:01.156 TEST_HEADER include/spdk/bdev_zone.h 00:06:01.156 TEST_HEADER include/spdk/bit_array.h 00:06:01.156 TEST_HEADER include/spdk/bit_pool.h 00:06:01.156 TEST_HEADER include/spdk/blob_bdev.h 00:06:01.156 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:01.156 TEST_HEADER include/spdk/blobfs.h 00:06:01.156 TEST_HEADER include/spdk/blob.h 00:06:01.156 TEST_HEADER include/spdk/conf.h 00:06:01.156 TEST_HEADER include/spdk/config.h 00:06:01.156 TEST_HEADER include/spdk/cpuset.h 00:06:01.156 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:01.156 TEST_HEADER include/spdk/crc16.h 00:06:01.156 TEST_HEADER include/spdk/crc32.h 00:06:01.156 TEST_HEADER include/spdk/crc64.h 00:06:01.156 TEST_HEADER include/spdk/dif.h 00:06:01.156 TEST_HEADER include/spdk/dma.h 00:06:01.156 TEST_HEADER include/spdk/endian.h 00:06:01.156 TEST_HEADER include/spdk/env_dpdk.h 00:06:01.156 TEST_HEADER include/spdk/env.h 00:06:01.156 TEST_HEADER include/spdk/event.h 00:06:01.156 TEST_HEADER include/spdk/fd_group.h 00:06:01.156 CC app/nvmf_tgt/nvmf_main.o 00:06:01.156 TEST_HEADER include/spdk/fd.h 00:06:01.156 TEST_HEADER include/spdk/file.h 00:06:01.156 TEST_HEADER include/spdk/fsdev.h 00:06:01.156 TEST_HEADER include/spdk/fsdev_module.h 00:06:01.156 TEST_HEADER include/spdk/ftl.h 00:06:01.156 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:01.156 TEST_HEADER include/spdk/gpt_spec.h 00:06:01.156 TEST_HEADER include/spdk/hexlify.h 00:06:01.156 TEST_HEADER include/spdk/histogram_data.h 00:06:01.156 TEST_HEADER include/spdk/idxd.h 00:06:01.156 TEST_HEADER include/spdk/idxd_spec.h 00:06:01.156 TEST_HEADER include/spdk/init.h 00:06:01.156 TEST_HEADER 
include/spdk/ioat.h 00:06:01.156 TEST_HEADER include/spdk/ioat_spec.h 00:06:01.156 TEST_HEADER include/spdk/iscsi_spec.h 00:06:01.156 TEST_HEADER include/spdk/json.h 00:06:01.156 TEST_HEADER include/spdk/jsonrpc.h 00:06:01.156 TEST_HEADER include/spdk/keyring.h 00:06:01.156 CC examples/util/zipf/zipf.o 00:06:01.156 CC test/thread/poller_perf/poller_perf.o 00:06:01.156 TEST_HEADER include/spdk/keyring_module.h 00:06:01.156 TEST_HEADER include/spdk/likely.h 00:06:01.156 TEST_HEADER include/spdk/log.h 00:06:01.156 TEST_HEADER include/spdk/lvol.h 00:06:01.156 TEST_HEADER include/spdk/md5.h 00:06:01.156 CC examples/ioat/perf/perf.o 00:06:01.156 TEST_HEADER include/spdk/memory.h 00:06:01.156 TEST_HEADER include/spdk/mmio.h 00:06:01.156 TEST_HEADER include/spdk/nbd.h 00:06:01.156 TEST_HEADER include/spdk/net.h 00:06:01.156 TEST_HEADER include/spdk/notify.h 00:06:01.156 TEST_HEADER include/spdk/nvme.h 00:06:01.156 TEST_HEADER include/spdk/nvme_intel.h 00:06:01.156 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:01.414 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:01.414 TEST_HEADER include/spdk/nvme_spec.h 00:06:01.414 CC test/app/bdev_svc/bdev_svc.o 00:06:01.414 TEST_HEADER include/spdk/nvme_zns.h 00:06:01.414 CC test/dma/test_dma/test_dma.o 00:06:01.414 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:01.414 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:01.414 TEST_HEADER include/spdk/nvmf.h 00:06:01.414 TEST_HEADER include/spdk/nvmf_spec.h 00:06:01.414 TEST_HEADER include/spdk/nvmf_transport.h 00:06:01.414 TEST_HEADER include/spdk/opal.h 00:06:01.414 TEST_HEADER include/spdk/opal_spec.h 00:06:01.414 TEST_HEADER include/spdk/pci_ids.h 00:06:01.414 TEST_HEADER include/spdk/pipe.h 00:06:01.414 TEST_HEADER include/spdk/queue.h 00:06:01.414 TEST_HEADER include/spdk/reduce.h 00:06:01.414 TEST_HEADER include/spdk/rpc.h 00:06:01.414 TEST_HEADER include/spdk/scheduler.h 00:06:01.414 TEST_HEADER include/spdk/scsi.h 00:06:01.414 TEST_HEADER include/spdk/scsi_spec.h 00:06:01.414 TEST_HEADER include/spdk/sock.h 00:06:01.414 TEST_HEADER include/spdk/stdinc.h 00:06:01.414 TEST_HEADER include/spdk/string.h 00:06:01.414 TEST_HEADER include/spdk/thread.h 00:06:01.414 TEST_HEADER include/spdk/trace.h 00:06:01.414 TEST_HEADER include/spdk/trace_parser.h 00:06:01.414 TEST_HEADER include/spdk/tree.h 00:06:01.415 TEST_HEADER include/spdk/ublk.h 00:06:01.415 TEST_HEADER include/spdk/util.h 00:06:01.415 TEST_HEADER include/spdk/uuid.h 00:06:01.415 TEST_HEADER include/spdk/version.h 00:06:01.415 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:01.415 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:01.415 TEST_HEADER include/spdk/vhost.h 00:06:01.415 TEST_HEADER include/spdk/vmd.h 00:06:01.415 TEST_HEADER include/spdk/xor.h 00:06:01.415 TEST_HEADER include/spdk/zipf.h 00:06:01.415 CXX test/cpp_headers/accel.o 00:06:01.415 LINK zipf 00:06:01.415 LINK interrupt_tgt 00:06:01.415 LINK poller_perf 00:06:01.415 LINK nvmf_tgt 00:06:01.415 LINK spdk_trace_record 00:06:01.680 LINK ioat_perf 00:06:01.680 LINK bdev_svc 00:06:01.680 CXX test/cpp_headers/accel_module.o 00:06:01.680 LINK spdk_trace 00:06:01.680 CXX test/cpp_headers/assert.o 00:06:01.680 CC examples/ioat/verify/verify.o 00:06:01.963 CXX test/cpp_headers/barrier.o 00:06:01.963 CXX test/cpp_headers/base64.o 00:06:01.963 CC examples/sock/hello_world/hello_sock.o 00:06:01.963 CC examples/vmd/lsvmd/lsvmd.o 00:06:01.963 LINK test_dma 00:06:01.963 CC examples/idxd/perf/perf.o 00:06:01.963 CC examples/thread/thread/thread_ex.o 00:06:01.963 CC app/iscsi_tgt/iscsi_tgt.o 00:06:01.963 LINK 
verify 00:06:01.963 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:01.963 CXX test/cpp_headers/bdev.o 00:06:01.963 LINK lsvmd 00:06:02.221 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:02.221 LINK hello_sock 00:06:02.221 LINK thread 00:06:02.221 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:02.221 LINK iscsi_tgt 00:06:02.221 LINK idxd_perf 00:06:02.221 CC examples/vmd/led/led.o 00:06:02.221 CXX test/cpp_headers/bdev_module.o 00:06:02.221 CXX test/cpp_headers/bdev_zone.o 00:06:02.221 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:02.478 CC app/spdk_tgt/spdk_tgt.o 00:06:02.478 CXX test/cpp_headers/bit_array.o 00:06:02.478 LINK led 00:06:02.478 LINK nvme_fuzz 00:06:02.478 CC app/spdk_lspci/spdk_lspci.o 00:06:02.478 CC app/spdk_nvme_perf/perf.o 00:06:02.478 CC examples/nvme/hello_world/hello_world.o 00:06:02.478 CXX test/cpp_headers/bit_pool.o 00:06:02.736 LINK spdk_tgt 00:06:02.736 CC examples/nvme/reconnect/reconnect.o 00:06:02.736 LINK spdk_lspci 00:06:02.736 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:02.736 CC app/spdk_nvme_identify/identify.o 00:06:02.736 LINK vhost_fuzz 00:06:02.736 CXX test/cpp_headers/blob_bdev.o 00:06:02.736 LINK hello_world 00:06:02.736 CXX test/cpp_headers/blobfs_bdev.o 00:06:02.994 CXX test/cpp_headers/blobfs.o 00:06:02.994 LINK reconnect 00:06:02.994 CC examples/accel/perf/accel_perf.o 00:06:02.994 CC examples/blob/hello_world/hello_blob.o 00:06:03.252 CXX test/cpp_headers/blob.o 00:06:03.252 CC examples/blob/cli/blobcli.o 00:06:03.252 LINK nvme_manage 00:06:03.252 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:03.252 CC examples/nvme/arbitration/arbitration.o 00:06:03.252 CXX test/cpp_headers/conf.o 00:06:03.252 LINK hello_blob 00:06:03.511 CC examples/nvme/hotplug/hotplug.o 00:06:03.511 LINK spdk_nvme_perf 00:06:03.511 LINK hello_fsdev 00:06:03.511 LINK spdk_nvme_identify 00:06:03.511 CXX test/cpp_headers/config.o 00:06:03.511 LINK accel_perf 00:06:03.511 CXX test/cpp_headers/cpuset.o 00:06:03.511 CXX test/cpp_headers/crc16.o 00:06:03.511 LINK arbitration 00:06:03.770 LINK blobcli 00:06:03.770 CC app/spdk_nvme_discover/discovery_aer.o 00:06:03.770 LINK hotplug 00:06:03.770 CXX test/cpp_headers/crc32.o 00:06:03.770 LINK iscsi_fuzz 00:06:03.770 CC app/spdk_top/spdk_top.o 00:06:03.770 CC app/vhost/vhost.o 00:06:03.770 CC app/spdk_dd/spdk_dd.o 00:06:03.770 CC app/fio/nvme/fio_plugin.o 00:06:04.030 CXX test/cpp_headers/crc64.o 00:06:04.030 LINK spdk_nvme_discover 00:06:04.030 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:04.030 CC examples/nvme/abort/abort.o 00:06:04.030 CC examples/bdev/hello_world/hello_bdev.o 00:06:04.030 LINK vhost 00:06:04.030 CC test/app/histogram_perf/histogram_perf.o 00:06:04.030 CXX test/cpp_headers/dif.o 00:06:04.030 LINK cmb_copy 00:06:04.030 CC test/app/jsoncat/jsoncat.o 00:06:04.289 LINK histogram_perf 00:06:04.289 LINK hello_bdev 00:06:04.289 CXX test/cpp_headers/dma.o 00:06:04.289 LINK spdk_dd 00:06:04.289 LINK jsoncat 00:06:04.289 CC app/fio/bdev/fio_plugin.o 00:06:04.289 LINK abort 00:06:04.548 LINK spdk_nvme 00:06:04.548 CC test/app/stub/stub.o 00:06:04.548 CXX test/cpp_headers/endian.o 00:06:04.548 CXX test/cpp_headers/env_dpdk.o 00:06:04.548 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:04.548 CC test/env/mem_callbacks/mem_callbacks.o 00:06:04.548 CC examples/bdev/bdevperf/bdevperf.o 00:06:04.548 CC test/env/vtophys/vtophys.o 00:06:04.548 LINK spdk_top 00:06:04.548 LINK stub 00:06:04.806 CC test/event/event_perf/event_perf.o 00:06:04.806 CXX test/cpp_headers/env.o 00:06:04.806 CC test/event/reactor/reactor.o 
00:06:04.806 LINK vtophys 00:06:04.806 LINK pmr_persistence 00:06:04.806 LINK event_perf 00:06:04.806 CC test/event/reactor_perf/reactor_perf.o 00:06:04.806 LINK spdk_bdev 00:06:04.806 CXX test/cpp_headers/event.o 00:06:04.806 LINK reactor 00:06:05.066 CXX test/cpp_headers/fd_group.o 00:06:05.066 CXX test/cpp_headers/fd.o 00:06:05.066 CC test/nvme/aer/aer.o 00:06:05.066 CXX test/cpp_headers/file.o 00:06:05.066 CXX test/cpp_headers/fsdev.o 00:06:05.066 LINK reactor_perf 00:06:05.066 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:05.325 CXX test/cpp_headers/fsdev_module.o 00:06:05.325 CC test/rpc_client/rpc_client_test.o 00:06:05.325 CC test/env/memory/memory_ut.o 00:06:05.325 CC test/event/app_repeat/app_repeat.o 00:06:05.325 LINK aer 00:06:05.325 LINK env_dpdk_post_init 00:06:05.325 LINK mem_callbacks 00:06:05.325 CC test/accel/dif/dif.o 00:06:05.325 CXX test/cpp_headers/ftl.o 00:06:05.325 LINK rpc_client_test 00:06:05.325 CC test/blobfs/mkfs/mkfs.o 00:06:05.325 LINK bdevperf 00:06:05.584 LINK app_repeat 00:06:05.584 CXX test/cpp_headers/fuse_dispatcher.o 00:06:05.584 CC test/env/pci/pci_ut.o 00:06:05.584 CC test/nvme/reset/reset.o 00:06:05.584 LINK mkfs 00:06:05.584 CXX test/cpp_headers/gpt_spec.o 00:06:05.843 CC test/event/scheduler/scheduler.o 00:06:05.843 CC test/nvme/sgl/sgl.o 00:06:05.843 CC test/lvol/esnap/esnap.o 00:06:05.843 CXX test/cpp_headers/hexlify.o 00:06:05.843 CC examples/nvmf/nvmf/nvmf.o 00:06:05.843 LINK reset 00:06:05.843 CC test/nvme/e2edp/nvme_dp.o 00:06:06.103 LINK scheduler 00:06:06.103 LINK dif 00:06:06.103 CXX test/cpp_headers/histogram_data.o 00:06:06.103 LINK sgl 00:06:06.103 LINK pci_ut 00:06:06.103 CXX test/cpp_headers/idxd.o 00:06:06.103 LINK nvmf 00:06:06.103 CXX test/cpp_headers/idxd_spec.o 00:06:06.103 CC test/nvme/overhead/overhead.o 00:06:06.363 LINK nvme_dp 00:06:06.363 CXX test/cpp_headers/init.o 00:06:06.363 CC test/nvme/err_injection/err_injection.o 00:06:06.363 CXX test/cpp_headers/ioat.o 00:06:06.363 CXX test/cpp_headers/ioat_spec.o 00:06:06.363 CXX test/cpp_headers/iscsi_spec.o 00:06:06.363 CXX test/cpp_headers/json.o 00:06:06.363 LINK memory_ut 00:06:06.622 CC test/nvme/startup/startup.o 00:06:06.622 LINK overhead 00:06:06.622 LINK err_injection 00:06:06.622 CXX test/cpp_headers/jsonrpc.o 00:06:06.622 CC test/bdev/bdevio/bdevio.o 00:06:06.622 CC test/nvme/reserve/reserve.o 00:06:06.622 CC test/nvme/simple_copy/simple_copy.o 00:06:06.622 CXX test/cpp_headers/keyring.o 00:06:06.622 CC test/nvme/connect_stress/connect_stress.o 00:06:06.622 LINK startup 00:06:06.622 CXX test/cpp_headers/keyring_module.o 00:06:06.881 CXX test/cpp_headers/likely.o 00:06:06.881 CC test/nvme/boot_partition/boot_partition.o 00:06:06.881 LINK reserve 00:06:06.881 CXX test/cpp_headers/log.o 00:06:06.881 CXX test/cpp_headers/lvol.o 00:06:06.881 LINK connect_stress 00:06:06.881 LINK simple_copy 00:06:06.881 CC test/nvme/compliance/nvme_compliance.o 00:06:06.881 LINK boot_partition 00:06:07.140 LINK bdevio 00:06:07.140 CXX test/cpp_headers/md5.o 00:06:07.141 CXX test/cpp_headers/memory.o 00:06:07.141 CC test/nvme/fused_ordering/fused_ordering.o 00:06:07.141 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:07.141 CXX test/cpp_headers/mmio.o 00:06:07.141 CC test/nvme/fdp/fdp.o 00:06:07.141 CC test/nvme/cuse/cuse.o 00:06:07.141 CXX test/cpp_headers/nbd.o 00:06:07.141 CXX test/cpp_headers/net.o 00:06:07.141 CXX test/cpp_headers/notify.o 00:06:07.399 LINK fused_ordering 00:06:07.399 LINK doorbell_aers 00:06:07.399 CXX test/cpp_headers/nvme.o 00:06:07.399 CXX 
test/cpp_headers/nvme_intel.o 00:06:07.399 LINK nvme_compliance 00:06:07.399 CXX test/cpp_headers/nvme_ocssd.o 00:06:07.399 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:07.399 CXX test/cpp_headers/nvme_spec.o 00:06:07.399 CXX test/cpp_headers/nvme_zns.o 00:06:07.399 CXX test/cpp_headers/nvmf_cmd.o 00:06:07.399 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:07.399 LINK fdp 00:06:07.399 CXX test/cpp_headers/nvmf.o 00:06:07.658 CXX test/cpp_headers/nvmf_spec.o 00:06:07.658 CXX test/cpp_headers/nvmf_transport.o 00:06:07.658 CXX test/cpp_headers/opal.o 00:06:07.658 CXX test/cpp_headers/opal_spec.o 00:06:07.658 CXX test/cpp_headers/pci_ids.o 00:06:07.658 CXX test/cpp_headers/pipe.o 00:06:07.658 CXX test/cpp_headers/queue.o 00:06:07.659 CXX test/cpp_headers/reduce.o 00:06:07.659 CXX test/cpp_headers/rpc.o 00:06:07.917 CXX test/cpp_headers/scheduler.o 00:06:07.917 CXX test/cpp_headers/scsi.o 00:06:07.917 CXX test/cpp_headers/scsi_spec.o 00:06:07.917 CXX test/cpp_headers/sock.o 00:06:07.917 CXX test/cpp_headers/stdinc.o 00:06:07.917 CXX test/cpp_headers/string.o 00:06:07.917 CXX test/cpp_headers/thread.o 00:06:07.917 CXX test/cpp_headers/trace.o 00:06:07.917 CXX test/cpp_headers/trace_parser.o 00:06:07.917 CXX test/cpp_headers/tree.o 00:06:07.917 CXX test/cpp_headers/ublk.o 00:06:07.917 CXX test/cpp_headers/util.o 00:06:08.176 CXX test/cpp_headers/uuid.o 00:06:08.176 CXX test/cpp_headers/version.o 00:06:08.176 CXX test/cpp_headers/vfio_user_pci.o 00:06:08.176 CXX test/cpp_headers/vfio_user_spec.o 00:06:08.176 CXX test/cpp_headers/vhost.o 00:06:08.176 CXX test/cpp_headers/vmd.o 00:06:08.176 CXX test/cpp_headers/xor.o 00:06:08.176 CXX test/cpp_headers/zipf.o 00:06:08.434 LINK cuse 00:06:10.965 LINK esnap 00:06:11.223 ************************************ 00:06:11.223 END TEST make 00:06:11.223 ************************************ 00:06:11.223 00:06:11.223 real 1m30.575s 00:06:11.223 user 8m13.158s 00:06:11.223 sys 1m38.322s 00:06:11.223 19:26:04 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:11.223 19:26:04 make -- common/autotest_common.sh@10 -- $ set +x 00:06:11.223 19:26:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:11.223 19:26:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:11.223 19:26:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:11.223 19:26:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:11.223 19:26:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:11.223 19:26:04 -- pm/common@44 -- $ pid=5237 00:06:11.223 19:26:04 -- pm/common@50 -- $ kill -TERM 5237 00:06:11.223 19:26:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:11.223 19:26:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:11.223 19:26:04 -- pm/common@44 -- $ pid=5238 00:06:11.223 19:26:04 -- pm/common@50 -- $ kill -TERM 5238 00:06:11.223 19:26:04 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:11.224 19:26:04 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:11.485 19:26:04 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:11.485 19:26:04 -- common/autotest_common.sh@1711 -- # lcov --version 00:06:11.485 19:26:04 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:11.485 19:26:04 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:11.485 19:26:04 -- scripts/common.sh@373 -- # 
cmp_versions 1.15 '<' 2 00:06:11.485 19:26:04 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.485 19:26:04 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.485 19:26:04 -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.485 19:26:04 -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.485 19:26:04 -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.485 19:26:04 -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.485 19:26:04 -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.485 19:26:04 -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.485 19:26:04 -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.485 19:26:04 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.485 19:26:04 -- scripts/common.sh@344 -- # case "$op" in 00:06:11.485 19:26:04 -- scripts/common.sh@345 -- # : 1 00:06:11.485 19:26:04 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.485 19:26:04 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.485 19:26:04 -- scripts/common.sh@365 -- # decimal 1 00:06:11.485 19:26:04 -- scripts/common.sh@353 -- # local d=1 00:06:11.485 19:26:04 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.485 19:26:04 -- scripts/common.sh@355 -- # echo 1 00:06:11.485 19:26:04 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.485 19:26:04 -- scripts/common.sh@366 -- # decimal 2 00:06:11.485 19:26:04 -- scripts/common.sh@353 -- # local d=2 00:06:11.485 19:26:04 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.485 19:26:04 -- scripts/common.sh@355 -- # echo 2 00:06:11.485 19:26:04 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.485 19:26:04 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.485 19:26:04 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.485 19:26:04 -- scripts/common.sh@368 -- # return 0 00:06:11.485 19:26:04 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.485 19:26:04 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:11.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.485 --rc genhtml_branch_coverage=1 00:06:11.485 --rc genhtml_function_coverage=1 00:06:11.485 --rc genhtml_legend=1 00:06:11.485 --rc geninfo_all_blocks=1 00:06:11.485 --rc geninfo_unexecuted_blocks=1 00:06:11.485 00:06:11.485 ' 00:06:11.485 19:26:04 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:11.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.485 --rc genhtml_branch_coverage=1 00:06:11.485 --rc genhtml_function_coverage=1 00:06:11.485 --rc genhtml_legend=1 00:06:11.485 --rc geninfo_all_blocks=1 00:06:11.485 --rc geninfo_unexecuted_blocks=1 00:06:11.485 00:06:11.485 ' 00:06:11.485 19:26:04 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:11.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.485 --rc genhtml_branch_coverage=1 00:06:11.485 --rc genhtml_function_coverage=1 00:06:11.485 --rc genhtml_legend=1 00:06:11.485 --rc geninfo_all_blocks=1 00:06:11.485 --rc geninfo_unexecuted_blocks=1 00:06:11.485 00:06:11.485 ' 00:06:11.485 19:26:04 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:11.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.485 --rc genhtml_branch_coverage=1 00:06:11.485 --rc genhtml_function_coverage=1 00:06:11.485 --rc genhtml_legend=1 00:06:11.485 --rc geninfo_all_blocks=1 00:06:11.485 --rc geninfo_unexecuted_blocks=1 00:06:11.485 00:06:11.485 ' 00:06:11.485 19:26:04 -- spdk/autotest.sh@25 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:11.485 19:26:04 -- nvmf/common.sh@7 -- # uname -s 00:06:11.485 19:26:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.485 19:26:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.485 19:26:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.485 19:26:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.485 19:26:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.485 19:26:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.485 19:26:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.485 19:26:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.485 19:26:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.485 19:26:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.485 19:26:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:06:11.485 19:26:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:06:11.485 19:26:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.485 19:26:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.485 19:26:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:11.485 19:26:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.485 19:26:04 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:11.485 19:26:04 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:11.485 19:26:04 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.485 19:26:04 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.485 19:26:04 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.485 19:26:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.485 19:26:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.485 19:26:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.485 19:26:04 -- paths/export.sh@5 -- # export PATH 00:06:11.485 19:26:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.485 19:26:04 -- nvmf/common.sh@51 -- # : 0 00:06:11.485 19:26:04 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:11.485 19:26:04 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:11.485 19:26:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.485 19:26:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.485 19:26:04 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.485 19:26:04 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:11.485 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:11.485 19:26:04 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:11.485 19:26:04 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:11.485 19:26:04 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:11.485 19:26:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:11.485 19:26:04 -- spdk/autotest.sh@32 -- # uname -s 00:06:11.485 19:26:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:11.485 19:26:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:11.485 19:26:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:11.485 19:26:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:11.485 19:26:04 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:11.485 19:26:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:11.485 19:26:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:11.485 19:26:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:11.485 19:26:04 -- spdk/autotest.sh@48 -- # udevadm_pid=54356 00:06:11.485 19:26:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:11.485 19:26:04 -- pm/common@17 -- # local monitor 00:06:11.485 19:26:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:11.485 19:26:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:11.485 19:26:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:11.485 19:26:04 -- pm/common@25 -- # sleep 1 00:06:11.485 19:26:04 -- pm/common@21 -- # date +%s 00:06:11.485 19:26:04 -- pm/common@21 -- # date +%s 00:06:11.485 19:26:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733426764 00:06:11.485 19:26:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733426764 00:06:11.766 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733426764_collect-vmstat.pm.log 00:06:11.766 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733426764_collect-cpu-load.pm.log 00:06:12.704 19:26:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:12.704 19:26:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:12.704 19:26:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.704 19:26:05 -- common/autotest_common.sh@10 -- # set +x 00:06:12.704 19:26:05 -- spdk/autotest.sh@59 -- # create_test_list 00:06:12.704 19:26:05 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:12.704 19:26:05 -- common/autotest_common.sh@10 -- # set +x 00:06:12.704 19:26:05 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:12.704 19:26:05 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:12.704 19:26:05 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:12.704 19:26:05 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:12.704 19:26:05 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:12.704 19:26:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
00:06:12.704 19:26:05 -- common/autotest_common.sh@1457 -- # uname 00:06:12.704 19:26:05 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:12.704 19:26:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:12.704 19:26:05 -- common/autotest_common.sh@1477 -- # uname 00:06:12.704 19:26:05 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:12.704 19:26:05 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:12.704 19:26:05 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:12.704 lcov: LCOV version 1.15 00:06:12.704 19:26:06 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:30.897 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:30.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:45.887 19:26:38 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:45.887 19:26:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:45.887 19:26:38 -- common/autotest_common.sh@10 -- # set +x 00:06:45.887 19:26:38 -- spdk/autotest.sh@78 -- # rm -f 00:06:45.887 19:26:38 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:45.887 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:46.145 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:46.145 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:46.145 19:26:39 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:46.145 19:26:39 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:46.145 19:26:39 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:46.145 19:26:39 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:46.145 19:26:39 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:46.145 19:26:39 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:46.145 19:26:39 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:46.145 19:26:39 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:46.146 19:26:39 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:46.146 19:26:39 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:46.146 19:26:39 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:46.146 19:26:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:46.146 19:26:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:46.146 19:26:39 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:46.146 19:26:39 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:46.146 19:26:39 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:46.146 19:26:39 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:46.146 19:26:39 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:46.146 19:26:39 -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme1n1/queue/zoned ]] 00:06:46.146 19:26:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:46.146 19:26:39 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:46.146 19:26:39 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:06:46.146 19:26:39 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:46.146 19:26:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:46.146 19:26:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:46.146 19:26:39 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:46.146 19:26:39 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:06:46.146 19:26:39 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:46.146 19:26:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:46.146 19:26:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:46.146 19:26:39 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:46.146 19:26:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:46.146 19:26:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:46.146 19:26:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:46.146 19:26:39 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:46.146 19:26:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:46.146 No valid GPT data, bailing 00:06:46.146 19:26:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:46.146 19:26:39 -- scripts/common.sh@394 -- # pt= 00:06:46.146 19:26:39 -- scripts/common.sh@395 -- # return 1 00:06:46.146 19:26:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:46.146 1+0 records in 00:06:46.146 1+0 records out 00:06:46.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048697 s, 215 MB/s 00:06:46.146 19:26:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:46.146 19:26:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:46.146 19:26:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:46.146 19:26:39 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:46.146 19:26:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:46.146 No valid GPT data, bailing 00:06:46.146 19:26:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:46.146 19:26:39 -- scripts/common.sh@394 -- # pt= 00:06:46.146 19:26:39 -- scripts/common.sh@395 -- # return 1 00:06:46.146 19:26:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:46.146 1+0 records in 00:06:46.146 1+0 records out 00:06:46.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501745 s, 209 MB/s 00:06:46.146 19:26:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:46.146 19:26:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:46.146 19:26:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:46.146 19:26:39 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:46.146 19:26:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:46.405 No valid GPT data, bailing 00:06:46.405 19:26:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:46.405 19:26:39 -- scripts/common.sh@394 -- # pt= 00:06:46.405 19:26:39 -- scripts/common.sh@395 -- # return 1 00:06:46.405 19:26:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero 
of=/dev/nvme1n2 bs=1M count=1 00:06:46.405 1+0 records in 00:06:46.405 1+0 records out 00:06:46.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515063 s, 204 MB/s 00:06:46.405 19:26:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:46.405 19:26:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:46.405 19:26:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:46.405 19:26:39 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:46.405 19:26:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:46.405 No valid GPT data, bailing 00:06:46.405 19:26:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:46.405 19:26:39 -- scripts/common.sh@394 -- # pt= 00:06:46.405 19:26:39 -- scripts/common.sh@395 -- # return 1 00:06:46.405 19:26:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:46.405 1+0 records in 00:06:46.405 1+0 records out 00:06:46.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049914 s, 210 MB/s 00:06:46.405 19:26:39 -- spdk/autotest.sh@105 -- # sync 00:06:46.405 19:26:39 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:46.405 19:26:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:46.405 19:26:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:48.935 19:26:41 -- spdk/autotest.sh@111 -- # uname -s 00:06:48.935 19:26:41 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:48.935 19:26:41 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:48.935 19:26:41 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:49.193 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:49.194 Hugepages 00:06:49.194 node hugesize free / total 00:06:49.194 node0 1048576kB 0 / 0 00:06:49.194 node0 2048kB 0 / 0 00:06:49.194 00:06:49.194 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:49.194 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:49.194 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:49.453 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:49.453 19:26:42 -- spdk/autotest.sh@117 -- # uname -s 00:06:49.453 19:26:42 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:49.453 19:26:42 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:49.453 19:26:42 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:50.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:50.020 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.279 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.279 19:26:43 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:51.215 19:26:44 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:51.215 19:26:44 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:51.215 19:26:44 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:51.215 19:26:44 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:51.215 19:26:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:51.215 19:26:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:51.215 19:26:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:51.215 19:26:44 -- common/autotest_common.sh@1499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:51.215 19:26:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:51.474 19:26:44 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:51.474 19:26:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:51.474 19:26:44 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:51.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:51.733 Waiting for block devices as requested 00:06:51.733 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:51.733 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:51.992 19:26:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:51.992 19:26:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:51.992 19:26:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:51.992 19:26:45 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:51.992 19:26:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:51.992 19:26:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:51.992 19:26:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:51.992 19:26:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:51.992 19:26:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:51.992 19:26:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:51.992 19:26:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:51.992 19:26:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:51.992 19:26:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:51.992 19:26:45 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:51.992 19:26:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:51.992 19:26:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:51.992 19:26:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:51.992 19:26:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:51.992 19:26:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:51.992 19:26:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:51.992 19:26:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:51.992 19:26:45 -- common/autotest_common.sh@1543 -- # continue 00:06:51.992 19:26:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:51.992 19:26:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:51.992 19:26:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:51.992 19:26:45 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:51.992 19:26:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:51.992 19:26:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:51.992 19:26:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:51.992 19:26:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:51.992 19:26:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 
00:06:51.992 19:26:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:51.992 19:26:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:51.992 19:26:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:51.992 19:26:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:51.992 19:26:45 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:51.992 19:26:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:51.992 19:26:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:51.992 19:26:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:51.992 19:26:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:51.992 19:26:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:51.992 19:26:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:51.992 19:26:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:51.992 19:26:45 -- common/autotest_common.sh@1543 -- # continue 00:06:51.992 19:26:45 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:51.992 19:26:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.992 19:26:45 -- common/autotest_common.sh@10 -- # set +x 00:06:51.992 19:26:45 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:51.992 19:26:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.992 19:26:45 -- common/autotest_common.sh@10 -- # set +x 00:06:51.992 19:26:45 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:52.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:52.929 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:52.929 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:52.929 19:26:46 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:52.929 19:26:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:52.929 19:26:46 -- common/autotest_common.sh@10 -- # set +x 00:06:52.929 19:26:46 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:52.929 19:26:46 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:52.929 19:26:46 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:52.929 19:26:46 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:52.929 19:26:46 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:52.929 19:26:46 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:52.929 19:26:46 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:52.929 19:26:46 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:52.929 19:26:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:52.929 19:26:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:52.929 19:26:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:52.929 19:26:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:52.929 19:26:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:52.929 19:26:46 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:52.929 19:26:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:52.929 19:26:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:52.929 19:26:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:52.929 19:26:46 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:52.929 19:26:46 -- 
common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:52.929 19:26:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:52.929 19:26:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:52.929 19:26:46 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:52.929 19:26:46 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:52.929 19:26:46 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:52.929 19:26:46 -- common/autotest_common.sh@1572 -- # return 0 00:06:52.929 19:26:46 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:52.929 19:26:46 -- common/autotest_common.sh@1580 -- # return 0 00:06:52.929 19:26:46 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:52.929 19:26:46 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:52.929 19:26:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:52.929 19:26:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:52.929 19:26:46 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:52.929 19:26:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:52.929 19:26:46 -- common/autotest_common.sh@10 -- # set +x 00:06:52.929 19:26:46 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:52.929 19:26:46 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:52.929 19:26:46 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:52.929 19:26:46 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:52.929 19:26:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.929 19:26:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.929 19:26:46 -- common/autotest_common.sh@10 -- # set +x 00:06:52.929 ************************************ 00:06:52.929 START TEST env 00:06:52.929 ************************************ 00:06:52.929 19:26:46 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:53.188 * Looking for test storage... 00:06:53.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:53.188 19:26:46 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:53.188 19:26:46 env -- common/autotest_common.sh@1711 -- # lcov --version 00:06:53.188 19:26:46 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:53.188 19:26:46 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:53.188 19:26:46 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.188 19:26:46 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.188 19:26:46 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.188 19:26:46 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.188 19:26:46 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.188 19:26:46 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.188 19:26:46 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.188 19:26:46 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.188 19:26:46 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.188 19:26:46 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.189 19:26:46 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.189 19:26:46 env -- scripts/common.sh@344 -- # case "$op" in 00:06:53.189 19:26:46 env -- scripts/common.sh@345 -- # : 1 00:06:53.189 19:26:46 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.189 19:26:46 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.189 19:26:46 env -- scripts/common.sh@365 -- # decimal 1 00:06:53.189 19:26:46 env -- scripts/common.sh@353 -- # local d=1 00:06:53.189 19:26:46 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.189 19:26:46 env -- scripts/common.sh@355 -- # echo 1 00:06:53.189 19:26:46 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.189 19:26:46 env -- scripts/common.sh@366 -- # decimal 2 00:06:53.189 19:26:46 env -- scripts/common.sh@353 -- # local d=2 00:06:53.189 19:26:46 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.189 19:26:46 env -- scripts/common.sh@355 -- # echo 2 00:06:53.189 19:26:46 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.189 19:26:46 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.189 19:26:46 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.189 19:26:46 env -- scripts/common.sh@368 -- # return 0 00:06:53.189 19:26:46 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.189 19:26:46 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:53.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.189 --rc genhtml_branch_coverage=1 00:06:53.189 --rc genhtml_function_coverage=1 00:06:53.189 --rc genhtml_legend=1 00:06:53.189 --rc geninfo_all_blocks=1 00:06:53.189 --rc geninfo_unexecuted_blocks=1 00:06:53.189 00:06:53.189 ' 00:06:53.189 19:26:46 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:53.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.189 --rc genhtml_branch_coverage=1 00:06:53.189 --rc genhtml_function_coverage=1 00:06:53.189 --rc genhtml_legend=1 00:06:53.189 --rc geninfo_all_blocks=1 00:06:53.189 --rc geninfo_unexecuted_blocks=1 00:06:53.189 00:06:53.189 ' 00:06:53.189 19:26:46 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:53.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.189 --rc genhtml_branch_coverage=1 00:06:53.189 --rc genhtml_function_coverage=1 00:06:53.189 --rc genhtml_legend=1 00:06:53.189 --rc geninfo_all_blocks=1 00:06:53.189 --rc geninfo_unexecuted_blocks=1 00:06:53.189 00:06:53.189 ' 00:06:53.189 19:26:46 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:53.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.189 --rc genhtml_branch_coverage=1 00:06:53.189 --rc genhtml_function_coverage=1 00:06:53.189 --rc genhtml_legend=1 00:06:53.189 --rc geninfo_all_blocks=1 00:06:53.189 --rc geninfo_unexecuted_blocks=1 00:06:53.189 00:06:53.189 ' 00:06:53.189 19:26:46 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:53.189 19:26:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.189 19:26:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.189 19:26:46 env -- common/autotest_common.sh@10 -- # set +x 00:06:53.189 ************************************ 00:06:53.189 START TEST env_memory 00:06:53.189 ************************************ 00:06:53.189 19:26:46 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:53.189 00:06:53.189 00:06:53.189 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.189 http://cunit.sourceforge.net/ 00:06:53.189 00:06:53.189 00:06:53.189 Suite: memory 00:06:53.189 Test: alloc and free memory map ...[2024-12-05 19:26:46.605437] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:53.189 passed 00:06:53.449 Test: mem map translation ...[2024-12-05 19:26:46.636858] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:53.449 [2024-12-05 19:26:46.636907] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:53.449 [2024-12-05 19:26:46.636963] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:53.449 [2024-12-05 19:26:46.636974] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:53.449 passed 00:06:53.449 Test: mem map registration ...[2024-12-05 19:26:46.700572] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:53.449 [2024-12-05 19:26:46.700599] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:53.449 passed 00:06:53.449 Test: mem map adjacent registrations ...passed 00:06:53.449 00:06:53.449 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.449 suites 1 1 n/a 0 0 00:06:53.449 tests 4 4 4 0 0 00:06:53.449 asserts 152 152 152 0 n/a 00:06:53.449 00:06:53.449 Elapsed time = 0.214 seconds 00:06:53.449 00:06:53.449 real 0m0.230s 00:06:53.449 user 0m0.213s 00:06:53.449 sys 0m0.014s 00:06:53.449 19:26:46 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.449 19:26:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:53.449 ************************************ 00:06:53.449 END TEST env_memory 00:06:53.449 ************************************ 00:06:53.449 19:26:46 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:53.449 19:26:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.449 19:26:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.449 19:26:46 env -- common/autotest_common.sh@10 -- # set +x 00:06:53.449 ************************************ 00:06:53.449 START TEST env_vtophys 00:06:53.449 ************************************ 00:06:53.449 19:26:46 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:53.449 EAL: lib.eal log level changed from notice to debug 00:06:53.449 EAL: Detected lcore 0 as core 0 on socket 0 00:06:53.449 EAL: Detected lcore 1 as core 0 on socket 0 00:06:53.449 EAL: Detected lcore 2 as core 0 on socket 0 00:06:53.449 EAL: Detected lcore 3 as core 0 on socket 0 00:06:53.449 EAL: Detected lcore 4 as core 0 on socket 0 00:06:53.449 EAL: Detected lcore 5 as core 0 on socket 0 00:06:53.449 EAL: Detected lcore 6 as core 0 on socket 0 00:06:53.449 EAL: Detected lcore 7 as core 0 on socket 0 00:06:53.449 EAL: Detected lcore 8 as core 0 on socket 0 00:06:53.449 EAL: Detected lcore 9 as core 0 on socket 0 00:06:53.449 EAL: Maximum logical cores by configuration: 128 00:06:53.449 EAL: Detected CPU lcores: 10 00:06:53.449 EAL: Detected NUMA nodes: 1 00:06:53.449 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:53.449 EAL: Detected shared linkage of DPDK 00:06:53.449 EAL: No 
shared files mode enabled, IPC will be disabled 00:06:53.449 EAL: Selected IOVA mode 'PA' 00:06:53.449 EAL: Probing VFIO support... 00:06:53.449 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:53.449 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:53.449 EAL: Ask a virtual area of 0x2e000 bytes 00:06:53.449 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:53.449 EAL: Setting up physically contiguous memory... 00:06:53.449 EAL: Setting maximum number of open files to 524288 00:06:53.449 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:53.449 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:53.449 EAL: Ask a virtual area of 0x61000 bytes 00:06:53.449 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:53.449 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:53.449 EAL: Ask a virtual area of 0x400000000 bytes 00:06:53.449 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:53.449 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:53.449 EAL: Ask a virtual area of 0x61000 bytes 00:06:53.449 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:53.449 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:53.449 EAL: Ask a virtual area of 0x400000000 bytes 00:06:53.449 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:53.449 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:53.449 EAL: Ask a virtual area of 0x61000 bytes 00:06:53.449 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:53.449 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:53.449 EAL: Ask a virtual area of 0x400000000 bytes 00:06:53.449 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:53.449 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:53.449 EAL: Ask a virtual area of 0x61000 bytes 00:06:53.449 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:53.449 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:53.449 EAL: Ask a virtual area of 0x400000000 bytes 00:06:53.449 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:53.449 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:53.449 EAL: Hugepages will be freed exactly as allocated. 00:06:53.449 EAL: No shared files mode enabled, IPC is disabled 00:06:53.449 EAL: No shared files mode enabled, IPC is disabled 00:06:53.709 EAL: TSC frequency is ~2200000 KHz 00:06:53.709 EAL: Main lcore 0 is ready (tid=7f89eb095a00;cpuset=[0]) 00:06:53.709 EAL: Trying to obtain current memory policy. 00:06:53.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:53.709 EAL: Restoring previous memory policy: 0 00:06:53.709 EAL: request: mp_malloc_sync 00:06:53.709 EAL: No shared files mode enabled, IPC is disabled 00:06:53.709 EAL: Heap on socket 0 was expanded by 2MB 00:06:53.709 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:53.709 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:53.709 EAL: Mem event callback 'spdk:(nil)' registered 00:06:53.709 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:06:53.709 00:06:53.709 00:06:53.709 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.709 http://cunit.sourceforge.net/ 00:06:53.709 00:06:53.709 00:06:53.709 Suite: components_suite 00:06:53.709 Test: vtophys_malloc_test ...passed 00:06:53.709 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:53.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:53.709 EAL: Restoring previous memory policy: 4 00:06:53.709 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.709 EAL: request: mp_malloc_sync 00:06:53.709 EAL: No shared files mode enabled, IPC is disabled 00:06:53.709 EAL: Heap on socket 0 was expanded by 4MB 00:06:53.709 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.709 EAL: request: mp_malloc_sync 00:06:53.709 EAL: No shared files mode enabled, IPC is disabled 00:06:53.709 EAL: Heap on socket 0 was shrunk by 4MB 00:06:53.709 EAL: Trying to obtain current memory policy. 00:06:53.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:53.709 EAL: Restoring previous memory policy: 4 00:06:53.709 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.709 EAL: request: mp_malloc_sync 00:06:53.709 EAL: No shared files mode enabled, IPC is disabled 00:06:53.709 EAL: Heap on socket 0 was expanded by 6MB 00:06:53.709 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.709 EAL: request: mp_malloc_sync 00:06:53.709 EAL: No shared files mode enabled, IPC is disabled 00:06:53.709 EAL: Heap on socket 0 was shrunk by 6MB 00:06:53.709 EAL: Trying to obtain current memory policy. 00:06:53.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:53.709 EAL: Restoring previous memory policy: 4 00:06:53.709 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.709 EAL: request: mp_malloc_sync 00:06:53.709 EAL: No shared files mode enabled, IPC is disabled 00:06:53.709 EAL: Heap on socket 0 was expanded by 10MB 00:06:53.709 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.709 EAL: request: mp_malloc_sync 00:06:53.709 EAL: No shared files mode enabled, IPC is disabled 00:06:53.709 EAL: Heap on socket 0 was shrunk by 10MB 00:06:53.709 EAL: Trying to obtain current memory policy. 00:06:53.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:53.709 EAL: Restoring previous memory policy: 4 00:06:53.709 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.709 EAL: request: mp_malloc_sync 00:06:53.709 EAL: No shared files mode enabled, IPC is disabled 00:06:53.709 EAL: Heap on socket 0 was expanded by 18MB 00:06:53.709 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.709 EAL: request: mp_malloc_sync 00:06:53.709 EAL: No shared files mode enabled, IPC is disabled 00:06:53.709 EAL: Heap on socket 0 was shrunk by 18MB 00:06:53.709 EAL: Trying to obtain current memory policy. 00:06:53.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:53.709 EAL: Restoring previous memory policy: 4 00:06:53.709 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.709 EAL: request: mp_malloc_sync 00:06:53.709 EAL: No shared files mode enabled, IPC is disabled 00:06:53.709 EAL: Heap on socket 0 was expanded by 34MB 00:06:53.709 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.709 EAL: request: mp_malloc_sync 00:06:53.709 EAL: No shared files mode enabled, IPC is disabled 00:06:53.709 EAL: Heap on socket 0 was shrunk by 34MB 00:06:53.709 EAL: Trying to obtain current memory policy. 
00:06:53.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:53.709 EAL: Restoring previous memory policy: 4 00:06:53.709 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.709 EAL: request: mp_malloc_sync 00:06:53.709 EAL: No shared files mode enabled, IPC is disabled 00:06:53.709 EAL: Heap on socket 0 was expanded by 66MB 00:06:53.709 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.709 EAL: request: mp_malloc_sync 00:06:53.709 EAL: No shared files mode enabled, IPC is disabled 00:06:53.709 EAL: Heap on socket 0 was shrunk by 66MB 00:06:53.710 EAL: Trying to obtain current memory policy. 00:06:53.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:53.710 EAL: Restoring previous memory policy: 4 00:06:53.710 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.710 EAL: request: mp_malloc_sync 00:06:53.710 EAL: No shared files mode enabled, IPC is disabled 00:06:53.710 EAL: Heap on socket 0 was expanded by 130MB 00:06:53.710 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.968 EAL: request: mp_malloc_sync 00:06:53.968 EAL: No shared files mode enabled, IPC is disabled 00:06:53.968 EAL: Heap on socket 0 was shrunk by 130MB 00:06:53.968 EAL: Trying to obtain current memory policy. 00:06:53.968 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:53.968 EAL: Restoring previous memory policy: 4 00:06:53.968 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.968 EAL: request: mp_malloc_sync 00:06:53.968 EAL: No shared files mode enabled, IPC is disabled 00:06:53.968 EAL: Heap on socket 0 was expanded by 258MB 00:06:53.968 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.968 EAL: request: mp_malloc_sync 00:06:53.968 EAL: No shared files mode enabled, IPC is disabled 00:06:53.968 EAL: Heap on socket 0 was shrunk by 258MB 00:06:53.968 EAL: Trying to obtain current memory policy. 00:06:53.968 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:54.227 EAL: Restoring previous memory policy: 4 00:06:54.227 EAL: Calling mem event callback 'spdk:(nil)' 00:06:54.228 EAL: request: mp_malloc_sync 00:06:54.228 EAL: No shared files mode enabled, IPC is disabled 00:06:54.228 EAL: Heap on socket 0 was expanded by 514MB 00:06:54.228 EAL: Calling mem event callback 'spdk:(nil)' 00:06:54.228 EAL: request: mp_malloc_sync 00:06:54.228 EAL: No shared files mode enabled, IPC is disabled 00:06:54.228 EAL: Heap on socket 0 was shrunk by 514MB 00:06:54.228 EAL: Trying to obtain current memory policy. 
00:06:54.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:54.487 EAL: Restoring previous memory policy: 4 00:06:54.487 EAL: Calling mem event callback 'spdk:(nil)' 00:06:54.487 EAL: request: mp_malloc_sync 00:06:54.487 EAL: No shared files mode enabled, IPC is disabled 00:06:54.487 EAL: Heap on socket 0 was expanded by 1026MB 00:06:54.746 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.004 passed 00:06:55.004 00:06:55.004 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.004 suites 1 1 n/a 0 0 00:06:55.004 tests 2 2 2 0 0 00:06:55.004 asserts 5463 5463 5463 0 n/a 00:06:55.004 00:06:55.004 Elapsed time = 1.291 seconds 00:06:55.004 EAL: request: mp_malloc_sync 00:06:55.004 EAL: No shared files mode enabled, IPC is disabled 00:06:55.004 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:55.004 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.004 EAL: request: mp_malloc_sync 00:06:55.004 EAL: No shared files mode enabled, IPC is disabled 00:06:55.004 EAL: Heap on socket 0 was shrunk by 2MB 00:06:55.004 EAL: No shared files mode enabled, IPC is disabled 00:06:55.004 EAL: No shared files mode enabled, IPC is disabled 00:06:55.004 EAL: No shared files mode enabled, IPC is disabled 00:06:55.004 00:06:55.004 real 0m1.505s 00:06:55.004 user 0m0.855s 00:06:55.004 sys 0m0.513s 00:06:55.004 19:26:48 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.004 19:26:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:55.004 ************************************ 00:06:55.004 END TEST env_vtophys 00:06:55.004 ************************************ 00:06:55.004 19:26:48 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:55.004 19:26:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.004 19:26:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.004 19:26:48 env -- common/autotest_common.sh@10 -- # set +x 00:06:55.004 ************************************ 00:06:55.004 START TEST env_pci 00:06:55.004 ************************************ 00:06:55.004 19:26:48 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:55.004 00:06:55.004 00:06:55.004 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.004 http://cunit.sourceforge.net/ 00:06:55.004 00:06:55.004 00:06:55.004 Suite: pci 00:06:55.004 Test: pci_hook ...[2024-12-05 19:26:48.414449] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56594 has claimed it 00:06:55.004 passed 00:06:55.004 00:06:55.004 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.004 suites 1 1 n/a 0 0 00:06:55.004 tests 1 1 1 0 0 00:06:55.004 asserts 25 25 25 0 n/a 00:06:55.004 00:06:55.004 Elapsed time = 0.004 seconds 00:06:55.004 EAL: Cannot find device (10000:00:01.0) 00:06:55.004 EAL: Failed to attach device on primary process 00:06:55.004 00:06:55.004 real 0m0.024s 00:06:55.004 user 0m0.010s 00:06:55.004 sys 0m0.014s 00:06:55.004 19:26:48 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.004 19:26:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:55.004 ************************************ 00:06:55.004 END TEST env_pci 00:06:55.005 ************************************ 00:06:55.270 19:26:48 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:55.270 19:26:48 env -- env/env.sh@15 -- # uname 00:06:55.270 19:26:48 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:55.270 19:26:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:55.270 19:26:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:55.270 19:26:48 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:55.270 19:26:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.270 19:26:48 env -- common/autotest_common.sh@10 -- # set +x 00:06:55.270 ************************************ 00:06:55.270 START TEST env_dpdk_post_init 00:06:55.270 ************************************ 00:06:55.270 19:26:48 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:55.270 EAL: Detected CPU lcores: 10 00:06:55.270 EAL: Detected NUMA nodes: 1 00:06:55.270 EAL: Detected shared linkage of DPDK 00:06:55.270 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:55.270 EAL: Selected IOVA mode 'PA' 00:06:55.270 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:55.270 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:55.270 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:55.270 Starting DPDK initialization... 00:06:55.270 Starting SPDK post initialization... 00:06:55.270 SPDK NVMe probe 00:06:55.270 Attaching to 0000:00:10.0 00:06:55.270 Attaching to 0000:00:11.0 00:06:55.270 Attached to 0000:00:10.0 00:06:55.270 Attached to 0000:00:11.0 00:06:55.270 Cleaning up... 00:06:55.270 00:06:55.270 real 0m0.180s 00:06:55.270 user 0m0.043s 00:06:55.270 sys 0m0.037s 00:06:55.270 19:26:48 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.270 19:26:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:55.270 ************************************ 00:06:55.270 END TEST env_dpdk_post_init 00:06:55.270 ************************************ 00:06:55.543 19:26:48 env -- env/env.sh@26 -- # uname 00:06:55.543 19:26:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:55.543 19:26:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:55.543 19:26:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.543 19:26:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.543 19:26:48 env -- common/autotest_common.sh@10 -- # set +x 00:06:55.543 ************************************ 00:06:55.543 START TEST env_mem_callbacks 00:06:55.543 ************************************ 00:06:55.543 19:26:48 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:55.543 EAL: Detected CPU lcores: 10 00:06:55.543 EAL: Detected NUMA nodes: 1 00:06:55.543 EAL: Detected shared linkage of DPDK 00:06:55.543 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:55.543 EAL: Selected IOVA mode 'PA' 00:06:55.543 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:55.543 00:06:55.543 00:06:55.543 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.543 http://cunit.sourceforge.net/ 00:06:55.543 00:06:55.543 00:06:55.543 Suite: memory 00:06:55.543 Test: test ... 
00:06:55.543 register 0x200000200000 2097152 00:06:55.543 malloc 3145728 00:06:55.543 register 0x200000400000 4194304 00:06:55.543 buf 0x200000500000 len 3145728 PASSED 00:06:55.543 malloc 64 00:06:55.543 buf 0x2000004fff40 len 64 PASSED 00:06:55.543 malloc 4194304 00:06:55.543 register 0x200000800000 6291456 00:06:55.543 buf 0x200000a00000 len 4194304 PASSED 00:06:55.543 free 0x200000500000 3145728 00:06:55.543 free 0x2000004fff40 64 00:06:55.543 unregister 0x200000400000 4194304 PASSED 00:06:55.543 free 0x200000a00000 4194304 00:06:55.543 unregister 0x200000800000 6291456 PASSED 00:06:55.543 malloc 8388608 00:06:55.543 register 0x200000400000 10485760 00:06:55.543 buf 0x200000600000 len 8388608 PASSED 00:06:55.543 free 0x200000600000 8388608 00:06:55.543 unregister 0x200000400000 10485760 PASSED 00:06:55.543 passed 00:06:55.543 00:06:55.543 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.543 suites 1 1 n/a 0 0 00:06:55.543 tests 1 1 1 0 0 00:06:55.543 asserts 15 15 15 0 n/a 00:06:55.543 00:06:55.543 Elapsed time = 0.007 seconds 00:06:55.543 00:06:55.543 real 0m0.139s 00:06:55.543 user 0m0.014s 00:06:55.543 sys 0m0.024s 00:06:55.543 19:26:48 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.543 19:26:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:55.543 ************************************ 00:06:55.543 END TEST env_mem_callbacks 00:06:55.543 ************************************ 00:06:55.543 00:06:55.543 real 0m2.554s 00:06:55.543 user 0m1.357s 00:06:55.543 sys 0m0.843s 00:06:55.543 19:26:48 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.543 19:26:48 env -- common/autotest_common.sh@10 -- # set +x 00:06:55.543 ************************************ 00:06:55.543 END TEST env 00:06:55.543 ************************************ 00:06:55.543 19:26:48 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:55.543 19:26:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.543 19:26:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.543 19:26:48 -- common/autotest_common.sh@10 -- # set +x 00:06:55.543 ************************************ 00:06:55.543 START TEST rpc 00:06:55.543 ************************************ 00:06:55.543 19:26:48 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:55.802 * Looking for test storage... 
00:06:55.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:55.802 19:26:49 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.802 19:26:49 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:55.802 19:26:49 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:55.802 19:26:49 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:55.802 19:26:49 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.802 19:26:49 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.802 19:26:49 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.802 19:26:49 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.802 19:26:49 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.802 19:26:49 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.802 19:26:49 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.802 19:26:49 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.802 19:26:49 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.802 19:26:49 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.802 19:26:49 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.802 19:26:49 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:55.802 19:26:49 rpc -- scripts/common.sh@345 -- # : 1 00:06:55.802 19:26:49 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.802 19:26:49 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.802 19:26:49 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:55.802 19:26:49 rpc -- scripts/common.sh@353 -- # local d=1 00:06:55.802 19:26:49 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.802 19:26:49 rpc -- scripts/common.sh@355 -- # echo 1 00:06:55.802 19:26:49 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.802 19:26:49 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:55.802 19:26:49 rpc -- scripts/common.sh@353 -- # local d=2 00:06:55.802 19:26:49 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.802 19:26:49 rpc -- scripts/common.sh@355 -- # echo 2 00:06:55.802 19:26:49 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.802 19:26:49 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.802 19:26:49 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.802 19:26:49 rpc -- scripts/common.sh@368 -- # return 0 00:06:55.802 19:26:49 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.802 19:26:49 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.802 --rc genhtml_branch_coverage=1 00:06:55.802 --rc genhtml_function_coverage=1 00:06:55.802 --rc genhtml_legend=1 00:06:55.802 --rc geninfo_all_blocks=1 00:06:55.802 --rc geninfo_unexecuted_blocks=1 00:06:55.802 00:06:55.802 ' 00:06:55.802 19:26:49 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:55.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.802 --rc genhtml_branch_coverage=1 00:06:55.802 --rc genhtml_function_coverage=1 00:06:55.802 --rc genhtml_legend=1 00:06:55.802 --rc geninfo_all_blocks=1 00:06:55.802 --rc geninfo_unexecuted_blocks=1 00:06:55.802 00:06:55.802 ' 00:06:55.802 19:26:49 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.802 --rc genhtml_branch_coverage=1 00:06:55.802 --rc genhtml_function_coverage=1 00:06:55.802 --rc 
genhtml_legend=1 00:06:55.802 --rc geninfo_all_blocks=1 00:06:55.802 --rc geninfo_unexecuted_blocks=1 00:06:55.802 00:06:55.802 ' 00:06:55.802 19:26:49 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.802 --rc genhtml_branch_coverage=1 00:06:55.802 --rc genhtml_function_coverage=1 00:06:55.802 --rc genhtml_legend=1 00:06:55.802 --rc geninfo_all_blocks=1 00:06:55.802 --rc geninfo_unexecuted_blocks=1 00:06:55.802 00:06:55.802 ' 00:06:55.802 19:26:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56712 00:06:55.802 19:26:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:55.802 19:26:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56712 00:06:55.802 19:26:49 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:55.802 19:26:49 rpc -- common/autotest_common.sh@835 -- # '[' -z 56712 ']' 00:06:55.802 19:26:49 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.802 19:26:49 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.802 19:26:49 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.802 19:26:49 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.802 19:26:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.802 [2024-12-05 19:26:49.220089] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:06:55.802 [2024-12-05 19:26:49.220259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56712 ] 00:06:56.061 [2024-12-05 19:26:49.367436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.061 [2024-12-05 19:26:49.425390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:56.061 [2024-12-05 19:26:49.425504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56712' to capture a snapshot of events at runtime. 00:06:56.061 [2024-12-05 19:26:49.425517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:56.061 [2024-12-05 19:26:49.425525] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:56.061 [2024-12-05 19:26:49.425547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56712 for offline analysis/debug. 
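Note: the spdk_trace hint printed by app_setup_trace above can be tried by hand against the same target; a minimal sketch, assuming the spdk_repo layout from this run and substituting the pid that spdk_tgt actually reports (56712 is specific to this run):

    # start the target with the bdev tracepoint group enabled, as test/rpc/rpc.sh does above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    # snapshot tracepoint events from the running target, using the pid it printed at startup
    spdk_trace -s spdk_tgt -p <pid>
    # or keep a copy of /dev/shm/spdk_tgt_trace.pid<pid> for offline analysis, as the notice suggests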
00:06:56.061 [2024-12-05 19:26:49.426024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.321 [2024-12-05 19:26:49.500452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.321 19:26:49 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.321 19:26:49 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:56.321 19:26:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:56.321 19:26:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:56.321 19:26:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:56.321 19:26:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:56.321 19:26:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.321 19:26:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.321 19:26:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.321 ************************************ 00:06:56.321 START TEST rpc_integrity 00:06:56.321 ************************************ 00:06:56.321 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:56.321 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:56.321 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.321 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.321 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.321 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:56.321 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:56.581 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:56.581 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:56.581 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.581 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.581 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.581 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:56.581 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:56.581 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.581 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.581 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.581 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:56.581 { 00:06:56.581 "name": "Malloc0", 00:06:56.581 "aliases": [ 00:06:56.581 "d2a9bc4e-a8c8-4811-a4ee-a9636e87bde0" 00:06:56.581 ], 00:06:56.581 "product_name": "Malloc disk", 00:06:56.581 "block_size": 512, 00:06:56.581 "num_blocks": 16384, 00:06:56.581 "uuid": "d2a9bc4e-a8c8-4811-a4ee-a9636e87bde0", 00:06:56.581 "assigned_rate_limits": { 00:06:56.581 "rw_ios_per_sec": 0, 00:06:56.581 "rw_mbytes_per_sec": 0, 00:06:56.581 "r_mbytes_per_sec": 0, 00:06:56.581 "w_mbytes_per_sec": 0 00:06:56.581 }, 00:06:56.581 "claimed": false, 00:06:56.581 "zoned": false, 00:06:56.581 
"supported_io_types": { 00:06:56.581 "read": true, 00:06:56.581 "write": true, 00:06:56.581 "unmap": true, 00:06:56.581 "flush": true, 00:06:56.581 "reset": true, 00:06:56.581 "nvme_admin": false, 00:06:56.581 "nvme_io": false, 00:06:56.581 "nvme_io_md": false, 00:06:56.581 "write_zeroes": true, 00:06:56.581 "zcopy": true, 00:06:56.581 "get_zone_info": false, 00:06:56.581 "zone_management": false, 00:06:56.581 "zone_append": false, 00:06:56.581 "compare": false, 00:06:56.581 "compare_and_write": false, 00:06:56.581 "abort": true, 00:06:56.581 "seek_hole": false, 00:06:56.581 "seek_data": false, 00:06:56.581 "copy": true, 00:06:56.581 "nvme_iov_md": false 00:06:56.581 }, 00:06:56.581 "memory_domains": [ 00:06:56.581 { 00:06:56.581 "dma_device_id": "system", 00:06:56.581 "dma_device_type": 1 00:06:56.581 }, 00:06:56.581 { 00:06:56.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.581 "dma_device_type": 2 00:06:56.581 } 00:06:56.581 ], 00:06:56.581 "driver_specific": {} 00:06:56.581 } 00:06:56.581 ]' 00:06:56.581 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:56.581 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:56.581 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:56.582 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.582 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.582 [2024-12-05 19:26:49.874415] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:56.582 [2024-12-05 19:26:49.874469] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:56.582 [2024-12-05 19:26:49.874487] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15ffcb0 00:06:56.582 [2024-12-05 19:26:49.874497] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:56.582 [2024-12-05 19:26:49.876198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:56.582 [2024-12-05 19:26:49.876252] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:56.582 Passthru0 00:06:56.582 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.582 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:56.582 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.582 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.582 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.582 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:56.582 { 00:06:56.582 "name": "Malloc0", 00:06:56.582 "aliases": [ 00:06:56.582 "d2a9bc4e-a8c8-4811-a4ee-a9636e87bde0" 00:06:56.582 ], 00:06:56.582 "product_name": "Malloc disk", 00:06:56.582 "block_size": 512, 00:06:56.582 "num_blocks": 16384, 00:06:56.582 "uuid": "d2a9bc4e-a8c8-4811-a4ee-a9636e87bde0", 00:06:56.582 "assigned_rate_limits": { 00:06:56.582 "rw_ios_per_sec": 0, 00:06:56.582 "rw_mbytes_per_sec": 0, 00:06:56.582 "r_mbytes_per_sec": 0, 00:06:56.582 "w_mbytes_per_sec": 0 00:06:56.582 }, 00:06:56.582 "claimed": true, 00:06:56.582 "claim_type": "exclusive_write", 00:06:56.582 "zoned": false, 00:06:56.582 "supported_io_types": { 00:06:56.582 "read": true, 00:06:56.582 "write": true, 00:06:56.582 "unmap": true, 00:06:56.582 "flush": true, 00:06:56.582 "reset": true, 00:06:56.582 "nvme_admin": false, 
00:06:56.582 "nvme_io": false, 00:06:56.582 "nvme_io_md": false, 00:06:56.582 "write_zeroes": true, 00:06:56.582 "zcopy": true, 00:06:56.582 "get_zone_info": false, 00:06:56.582 "zone_management": false, 00:06:56.582 "zone_append": false, 00:06:56.582 "compare": false, 00:06:56.582 "compare_and_write": false, 00:06:56.582 "abort": true, 00:06:56.582 "seek_hole": false, 00:06:56.582 "seek_data": false, 00:06:56.582 "copy": true, 00:06:56.582 "nvme_iov_md": false 00:06:56.582 }, 00:06:56.582 "memory_domains": [ 00:06:56.582 { 00:06:56.582 "dma_device_id": "system", 00:06:56.582 "dma_device_type": 1 00:06:56.582 }, 00:06:56.582 { 00:06:56.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.582 "dma_device_type": 2 00:06:56.582 } 00:06:56.582 ], 00:06:56.582 "driver_specific": {} 00:06:56.582 }, 00:06:56.582 { 00:06:56.582 "name": "Passthru0", 00:06:56.582 "aliases": [ 00:06:56.582 "cb186303-ae34-5871-944c-82e7ad511eaf" 00:06:56.582 ], 00:06:56.582 "product_name": "passthru", 00:06:56.582 "block_size": 512, 00:06:56.582 "num_blocks": 16384, 00:06:56.582 "uuid": "cb186303-ae34-5871-944c-82e7ad511eaf", 00:06:56.582 "assigned_rate_limits": { 00:06:56.582 "rw_ios_per_sec": 0, 00:06:56.582 "rw_mbytes_per_sec": 0, 00:06:56.582 "r_mbytes_per_sec": 0, 00:06:56.582 "w_mbytes_per_sec": 0 00:06:56.582 }, 00:06:56.582 "claimed": false, 00:06:56.582 "zoned": false, 00:06:56.582 "supported_io_types": { 00:06:56.582 "read": true, 00:06:56.582 "write": true, 00:06:56.582 "unmap": true, 00:06:56.582 "flush": true, 00:06:56.582 "reset": true, 00:06:56.582 "nvme_admin": false, 00:06:56.582 "nvme_io": false, 00:06:56.582 "nvme_io_md": false, 00:06:56.582 "write_zeroes": true, 00:06:56.582 "zcopy": true, 00:06:56.582 "get_zone_info": false, 00:06:56.582 "zone_management": false, 00:06:56.582 "zone_append": false, 00:06:56.582 "compare": false, 00:06:56.582 "compare_and_write": false, 00:06:56.582 "abort": true, 00:06:56.582 "seek_hole": false, 00:06:56.582 "seek_data": false, 00:06:56.582 "copy": true, 00:06:56.582 "nvme_iov_md": false 00:06:56.582 }, 00:06:56.582 "memory_domains": [ 00:06:56.582 { 00:06:56.582 "dma_device_id": "system", 00:06:56.582 "dma_device_type": 1 00:06:56.582 }, 00:06:56.582 { 00:06:56.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.582 "dma_device_type": 2 00:06:56.582 } 00:06:56.582 ], 00:06:56.582 "driver_specific": { 00:06:56.582 "passthru": { 00:06:56.582 "name": "Passthru0", 00:06:56.582 "base_bdev_name": "Malloc0" 00:06:56.582 } 00:06:56.582 } 00:06:56.582 } 00:06:56.582 ]' 00:06:56.582 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:56.582 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:56.582 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:56.582 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.582 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.582 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.582 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:56.582 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.582 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.582 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.582 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:56.582 19:26:49 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.582 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.582 19:26:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.582 19:26:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:56.582 19:26:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:56.841 19:26:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:56.841 00:06:56.841 real 0m0.338s 00:06:56.841 user 0m0.223s 00:06:56.841 sys 0m0.043s 00:06:56.841 19:26:50 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.841 19:26:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.841 ************************************ 00:06:56.841 END TEST rpc_integrity 00:06:56.841 ************************************ 00:06:56.841 19:26:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:56.841 19:26:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.841 19:26:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.841 19:26:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.841 ************************************ 00:06:56.841 START TEST rpc_plugins 00:06:56.841 ************************************ 00:06:56.841 19:26:50 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:56.841 19:26:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:56.841 19:26:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.841 19:26:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:56.841 19:26:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.841 19:26:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:56.841 19:26:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:56.841 19:26:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.841 19:26:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:56.841 19:26:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.841 19:26:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:56.841 { 00:06:56.841 "name": "Malloc1", 00:06:56.841 "aliases": [ 00:06:56.841 "53ff84c2-f26c-4383-bf17-942f644f6209" 00:06:56.841 ], 00:06:56.841 "product_name": "Malloc disk", 00:06:56.841 "block_size": 4096, 00:06:56.841 "num_blocks": 256, 00:06:56.841 "uuid": "53ff84c2-f26c-4383-bf17-942f644f6209", 00:06:56.841 "assigned_rate_limits": { 00:06:56.841 "rw_ios_per_sec": 0, 00:06:56.841 "rw_mbytes_per_sec": 0, 00:06:56.841 "r_mbytes_per_sec": 0, 00:06:56.841 "w_mbytes_per_sec": 0 00:06:56.841 }, 00:06:56.841 "claimed": false, 00:06:56.841 "zoned": false, 00:06:56.841 "supported_io_types": { 00:06:56.841 "read": true, 00:06:56.841 "write": true, 00:06:56.841 "unmap": true, 00:06:56.841 "flush": true, 00:06:56.842 "reset": true, 00:06:56.842 "nvme_admin": false, 00:06:56.842 "nvme_io": false, 00:06:56.842 "nvme_io_md": false, 00:06:56.842 "write_zeroes": true, 00:06:56.842 "zcopy": true, 00:06:56.842 "get_zone_info": false, 00:06:56.842 "zone_management": false, 00:06:56.842 "zone_append": false, 00:06:56.842 "compare": false, 00:06:56.842 "compare_and_write": false, 00:06:56.842 "abort": true, 00:06:56.842 "seek_hole": false, 00:06:56.842 "seek_data": false, 00:06:56.842 "copy": true, 00:06:56.842 "nvme_iov_md": false 00:06:56.842 }, 00:06:56.842 "memory_domains": [ 00:06:56.842 { 
00:06:56.842 "dma_device_id": "system", 00:06:56.842 "dma_device_type": 1 00:06:56.842 }, 00:06:56.842 { 00:06:56.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.842 "dma_device_type": 2 00:06:56.842 } 00:06:56.842 ], 00:06:56.842 "driver_specific": {} 00:06:56.842 } 00:06:56.842 ]' 00:06:56.842 19:26:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:56.842 19:26:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:56.842 19:26:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:56.842 19:26:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.842 19:26:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:56.842 19:26:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.842 19:26:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:56.842 19:26:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.842 19:26:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:56.842 19:26:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.842 19:26:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:56.842 19:26:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:56.842 ************************************ 00:06:56.842 END TEST rpc_plugins 00:06:56.842 ************************************ 00:06:56.842 19:26:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:56.842 00:06:56.842 real 0m0.166s 00:06:56.842 user 0m0.105s 00:06:56.842 sys 0m0.023s 00:06:56.842 19:26:50 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.842 19:26:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:57.101 19:26:50 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:57.101 19:26:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.101 19:26:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.101 19:26:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.101 ************************************ 00:06:57.101 START TEST rpc_trace_cmd_test 00:06:57.101 ************************************ 00:06:57.101 19:26:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:57.101 19:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:57.101 19:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:57.101 19:26:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.101 19:26:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.101 19:26:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.101 19:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:57.101 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56712", 00:06:57.101 "tpoint_group_mask": "0x8", 00:06:57.101 "iscsi_conn": { 00:06:57.101 "mask": "0x2", 00:06:57.101 "tpoint_mask": "0x0" 00:06:57.101 }, 00:06:57.101 "scsi": { 00:06:57.101 "mask": "0x4", 00:06:57.101 "tpoint_mask": "0x0" 00:06:57.101 }, 00:06:57.101 "bdev": { 00:06:57.101 "mask": "0x8", 00:06:57.101 "tpoint_mask": "0xffffffffffffffff" 00:06:57.101 }, 00:06:57.101 "nvmf_rdma": { 00:06:57.101 "mask": "0x10", 00:06:57.101 "tpoint_mask": "0x0" 00:06:57.101 }, 00:06:57.101 "nvmf_tcp": { 00:06:57.101 "mask": "0x20", 00:06:57.101 "tpoint_mask": "0x0" 00:06:57.101 }, 00:06:57.101 "ftl": { 00:06:57.101 
"mask": "0x40", 00:06:57.101 "tpoint_mask": "0x0" 00:06:57.101 }, 00:06:57.101 "blobfs": { 00:06:57.101 "mask": "0x80", 00:06:57.101 "tpoint_mask": "0x0" 00:06:57.101 }, 00:06:57.101 "dsa": { 00:06:57.101 "mask": "0x200", 00:06:57.101 "tpoint_mask": "0x0" 00:06:57.101 }, 00:06:57.101 "thread": { 00:06:57.101 "mask": "0x400", 00:06:57.101 "tpoint_mask": "0x0" 00:06:57.101 }, 00:06:57.101 "nvme_pcie": { 00:06:57.101 "mask": "0x800", 00:06:57.101 "tpoint_mask": "0x0" 00:06:57.101 }, 00:06:57.101 "iaa": { 00:06:57.101 "mask": "0x1000", 00:06:57.101 "tpoint_mask": "0x0" 00:06:57.101 }, 00:06:57.101 "nvme_tcp": { 00:06:57.101 "mask": "0x2000", 00:06:57.101 "tpoint_mask": "0x0" 00:06:57.101 }, 00:06:57.101 "bdev_nvme": { 00:06:57.101 "mask": "0x4000", 00:06:57.101 "tpoint_mask": "0x0" 00:06:57.101 }, 00:06:57.101 "sock": { 00:06:57.101 "mask": "0x8000", 00:06:57.101 "tpoint_mask": "0x0" 00:06:57.101 }, 00:06:57.101 "blob": { 00:06:57.101 "mask": "0x10000", 00:06:57.101 "tpoint_mask": "0x0" 00:06:57.101 }, 00:06:57.101 "bdev_raid": { 00:06:57.101 "mask": "0x20000", 00:06:57.101 "tpoint_mask": "0x0" 00:06:57.101 }, 00:06:57.101 "scheduler": { 00:06:57.101 "mask": "0x40000", 00:06:57.101 "tpoint_mask": "0x0" 00:06:57.101 } 00:06:57.101 }' 00:06:57.101 19:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:57.101 19:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:57.101 19:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:57.101 19:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:57.101 19:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:57.101 19:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:57.101 19:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:57.360 19:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:57.360 19:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:57.360 ************************************ 00:06:57.360 END TEST rpc_trace_cmd_test 00:06:57.360 ************************************ 00:06:57.360 19:26:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:57.360 00:06:57.360 real 0m0.279s 00:06:57.360 user 0m0.240s 00:06:57.360 sys 0m0.027s 00:06:57.360 19:26:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.360 19:26:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.360 19:26:50 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:57.360 19:26:50 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:57.360 19:26:50 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:57.360 19:26:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.360 19:26:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.360 19:26:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.360 ************************************ 00:06:57.360 START TEST rpc_daemon_integrity 00:06:57.360 ************************************ 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.360 
19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:57.360 { 00:06:57.360 "name": "Malloc2", 00:06:57.360 "aliases": [ 00:06:57.360 "fc421423-d4c4-41f0-82a6-f6b742587230" 00:06:57.360 ], 00:06:57.360 "product_name": "Malloc disk", 00:06:57.360 "block_size": 512, 00:06:57.360 "num_blocks": 16384, 00:06:57.360 "uuid": "fc421423-d4c4-41f0-82a6-f6b742587230", 00:06:57.360 "assigned_rate_limits": { 00:06:57.360 "rw_ios_per_sec": 0, 00:06:57.360 "rw_mbytes_per_sec": 0, 00:06:57.360 "r_mbytes_per_sec": 0, 00:06:57.360 "w_mbytes_per_sec": 0 00:06:57.360 }, 00:06:57.360 "claimed": false, 00:06:57.360 "zoned": false, 00:06:57.360 "supported_io_types": { 00:06:57.360 "read": true, 00:06:57.360 "write": true, 00:06:57.360 "unmap": true, 00:06:57.360 "flush": true, 00:06:57.360 "reset": true, 00:06:57.360 "nvme_admin": false, 00:06:57.360 "nvme_io": false, 00:06:57.360 "nvme_io_md": false, 00:06:57.360 "write_zeroes": true, 00:06:57.360 "zcopy": true, 00:06:57.360 "get_zone_info": false, 00:06:57.360 "zone_management": false, 00:06:57.360 "zone_append": false, 00:06:57.360 "compare": false, 00:06:57.360 "compare_and_write": false, 00:06:57.360 "abort": true, 00:06:57.360 "seek_hole": false, 00:06:57.360 "seek_data": false, 00:06:57.360 "copy": true, 00:06:57.360 "nvme_iov_md": false 00:06:57.360 }, 00:06:57.360 "memory_domains": [ 00:06:57.360 { 00:06:57.360 "dma_device_id": "system", 00:06:57.360 "dma_device_type": 1 00:06:57.360 }, 00:06:57.360 { 00:06:57.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.360 "dma_device_type": 2 00:06:57.360 } 00:06:57.360 ], 00:06:57.360 "driver_specific": {} 00:06:57.360 } 00:06:57.360 ]' 00:06:57.360 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.620 [2024-12-05 19:26:50.811663] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:57.620 [2024-12-05 19:26:50.811916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:06:57.620 [2024-12-05 19:26:50.811977] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1664880 00:06:57.620 [2024-12-05 19:26:50.811999] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:57.620 [2024-12-05 19:26:50.813917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:57.620 [2024-12-05 19:26:50.813954] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:57.620 Passthru0 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:57.620 { 00:06:57.620 "name": "Malloc2", 00:06:57.620 "aliases": [ 00:06:57.620 "fc421423-d4c4-41f0-82a6-f6b742587230" 00:06:57.620 ], 00:06:57.620 "product_name": "Malloc disk", 00:06:57.620 "block_size": 512, 00:06:57.620 "num_blocks": 16384, 00:06:57.620 "uuid": "fc421423-d4c4-41f0-82a6-f6b742587230", 00:06:57.620 "assigned_rate_limits": { 00:06:57.620 "rw_ios_per_sec": 0, 00:06:57.620 "rw_mbytes_per_sec": 0, 00:06:57.620 "r_mbytes_per_sec": 0, 00:06:57.620 "w_mbytes_per_sec": 0 00:06:57.620 }, 00:06:57.620 "claimed": true, 00:06:57.620 "claim_type": "exclusive_write", 00:06:57.620 "zoned": false, 00:06:57.620 "supported_io_types": { 00:06:57.620 "read": true, 00:06:57.620 "write": true, 00:06:57.620 "unmap": true, 00:06:57.620 "flush": true, 00:06:57.620 "reset": true, 00:06:57.620 "nvme_admin": false, 00:06:57.620 "nvme_io": false, 00:06:57.620 "nvme_io_md": false, 00:06:57.620 "write_zeroes": true, 00:06:57.620 "zcopy": true, 00:06:57.620 "get_zone_info": false, 00:06:57.620 "zone_management": false, 00:06:57.620 "zone_append": false, 00:06:57.620 "compare": false, 00:06:57.620 "compare_and_write": false, 00:06:57.620 "abort": true, 00:06:57.620 "seek_hole": false, 00:06:57.620 "seek_data": false, 00:06:57.620 "copy": true, 00:06:57.620 "nvme_iov_md": false 00:06:57.620 }, 00:06:57.620 "memory_domains": [ 00:06:57.620 { 00:06:57.620 "dma_device_id": "system", 00:06:57.620 "dma_device_type": 1 00:06:57.620 }, 00:06:57.620 { 00:06:57.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.620 "dma_device_type": 2 00:06:57.620 } 00:06:57.620 ], 00:06:57.620 "driver_specific": {} 00:06:57.620 }, 00:06:57.620 { 00:06:57.620 "name": "Passthru0", 00:06:57.620 "aliases": [ 00:06:57.620 "e65bf490-271c-5e30-8f5f-299ae8c4f97c" 00:06:57.620 ], 00:06:57.620 "product_name": "passthru", 00:06:57.620 "block_size": 512, 00:06:57.620 "num_blocks": 16384, 00:06:57.620 "uuid": "e65bf490-271c-5e30-8f5f-299ae8c4f97c", 00:06:57.620 "assigned_rate_limits": { 00:06:57.620 "rw_ios_per_sec": 0, 00:06:57.620 "rw_mbytes_per_sec": 0, 00:06:57.620 "r_mbytes_per_sec": 0, 00:06:57.620 "w_mbytes_per_sec": 0 00:06:57.620 }, 00:06:57.620 "claimed": false, 00:06:57.620 "zoned": false, 00:06:57.620 "supported_io_types": { 00:06:57.620 "read": true, 00:06:57.620 "write": true, 00:06:57.620 "unmap": true, 00:06:57.620 "flush": true, 00:06:57.620 "reset": true, 00:06:57.620 "nvme_admin": false, 00:06:57.620 "nvme_io": false, 00:06:57.620 
"nvme_io_md": false, 00:06:57.620 "write_zeroes": true, 00:06:57.620 "zcopy": true, 00:06:57.620 "get_zone_info": false, 00:06:57.620 "zone_management": false, 00:06:57.620 "zone_append": false, 00:06:57.620 "compare": false, 00:06:57.620 "compare_and_write": false, 00:06:57.620 "abort": true, 00:06:57.620 "seek_hole": false, 00:06:57.620 "seek_data": false, 00:06:57.620 "copy": true, 00:06:57.620 "nvme_iov_md": false 00:06:57.620 }, 00:06:57.620 "memory_domains": [ 00:06:57.620 { 00:06:57.620 "dma_device_id": "system", 00:06:57.620 "dma_device_type": 1 00:06:57.620 }, 00:06:57.620 { 00:06:57.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.620 "dma_device_type": 2 00:06:57.620 } 00:06:57.620 ], 00:06:57.620 "driver_specific": { 00:06:57.620 "passthru": { 00:06:57.620 "name": "Passthru0", 00:06:57.620 "base_bdev_name": "Malloc2" 00:06:57.620 } 00:06:57.620 } 00:06:57.620 } 00:06:57.620 ]' 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:57.620 ************************************ 00:06:57.620 END TEST rpc_daemon_integrity 00:06:57.620 ************************************ 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:57.620 00:06:57.620 real 0m0.336s 00:06:57.620 user 0m0.218s 00:06:57.620 sys 0m0.050s 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.620 19:26:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.621 19:26:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:57.621 19:26:51 rpc -- rpc/rpc.sh@84 -- # killprocess 56712 00:06:57.621 19:26:51 rpc -- common/autotest_common.sh@954 -- # '[' -z 56712 ']' 00:06:57.621 19:26:51 rpc -- common/autotest_common.sh@958 -- # kill -0 56712 00:06:57.621 19:26:51 rpc -- common/autotest_common.sh@959 -- # uname 00:06:57.621 19:26:51 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.621 19:26:51 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56712 00:06:57.879 killing process with pid 56712 00:06:57.879 19:26:51 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.879 19:26:51 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.879 19:26:51 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56712' 00:06:57.879 19:26:51 rpc -- common/autotest_common.sh@973 -- # kill 56712 00:06:57.879 19:26:51 rpc -- common/autotest_common.sh@978 -- # wait 56712 00:06:58.138 00:06:58.138 real 0m2.512s 00:06:58.138 user 0m3.181s 00:06:58.138 sys 0m0.687s 00:06:58.138 19:26:51 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.138 19:26:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.138 ************************************ 00:06:58.138 END TEST rpc 00:06:58.138 ************************************ 00:06:58.138 19:26:51 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:58.138 19:26:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.138 19:26:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.138 19:26:51 -- common/autotest_common.sh@10 -- # set +x 00:06:58.138 ************************************ 00:06:58.138 START TEST skip_rpc 00:06:58.138 ************************************ 00:06:58.138 19:26:51 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:58.397 * Looking for test storage... 00:06:58.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:58.397 19:26:51 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:58.397 19:26:51 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:58.397 19:26:51 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:58.397 19:26:51 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.397 19:26:51 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:58.397 19:26:51 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.397 19:26:51 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:58.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.397 --rc genhtml_branch_coverage=1 00:06:58.397 --rc genhtml_function_coverage=1 00:06:58.397 --rc genhtml_legend=1 00:06:58.397 --rc geninfo_all_blocks=1 00:06:58.397 --rc geninfo_unexecuted_blocks=1 00:06:58.397 00:06:58.397 ' 00:06:58.397 19:26:51 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:58.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.397 --rc genhtml_branch_coverage=1 00:06:58.397 --rc genhtml_function_coverage=1 00:06:58.398 --rc genhtml_legend=1 00:06:58.398 --rc geninfo_all_blocks=1 00:06:58.398 --rc geninfo_unexecuted_blocks=1 00:06:58.398 00:06:58.398 ' 00:06:58.398 19:26:51 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:58.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.398 --rc genhtml_branch_coverage=1 00:06:58.398 --rc genhtml_function_coverage=1 00:06:58.398 --rc genhtml_legend=1 00:06:58.398 --rc geninfo_all_blocks=1 00:06:58.398 --rc geninfo_unexecuted_blocks=1 00:06:58.398 00:06:58.398 ' 00:06:58.398 19:26:51 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:58.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.398 --rc genhtml_branch_coverage=1 00:06:58.398 --rc genhtml_function_coverage=1 00:06:58.398 --rc genhtml_legend=1 00:06:58.398 --rc geninfo_all_blocks=1 00:06:58.398 --rc geninfo_unexecuted_blocks=1 00:06:58.398 00:06:58.398 ' 00:06:58.398 19:26:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:58.398 19:26:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:58.398 19:26:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:58.398 19:26:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.398 19:26:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.398 19:26:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.398 ************************************ 00:06:58.398 START TEST skip_rpc 00:06:58.398 ************************************ 00:06:58.398 19:26:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:58.398 19:26:51 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56910 00:06:58.398 19:26:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:58.398 19:26:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:58.398 19:26:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:58.398 [2024-12-05 19:26:51.801794] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:06:58.398 [2024-12-05 19:26:51.802174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56910 ] 00:06:58.657 [2024-12-05 19:26:51.945729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.657 [2024-12-05 19:26:52.009477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.657 [2024-12-05 19:26:52.082536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56910 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56910 ']' 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56910 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56910 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 56910' 00:07:04.003 killing process with pid 56910 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56910 00:07:04.003 19:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56910 00:07:04.003 00:07:04.003 real 0m5.447s 00:07:04.003 user 0m5.047s 00:07:04.003 sys 0m0.315s 00:07:04.003 19:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.003 ************************************ 00:07:04.003 END TEST skip_rpc 00:07:04.003 ************************************ 00:07:04.003 19:26:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.003 19:26:57 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:04.003 19:26:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.003 19:26:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.003 19:26:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.003 ************************************ 00:07:04.003 START TEST skip_rpc_with_json 00:07:04.003 ************************************ 00:07:04.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.003 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:04.003 19:26:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:04.003 19:26:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56997 00:07:04.003 19:26:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.003 19:26:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.003 19:26:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56997 00:07:04.003 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56997 ']' 00:07:04.003 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.003 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.003 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.004 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.004 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:04.004 [2024-12-05 19:26:57.295145] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:07:04.004 [2024-12-05 19:26:57.295573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56997 ] 00:07:04.263 [2024-12-05 19:26:57.450833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.263 [2024-12-05 19:26:57.510216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.263 [2024-12-05 19:26:57.591467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.522 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.522 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:04.522 19:26:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:04.522 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.522 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:04.522 [2024-12-05 19:26:57.799910] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:04.522 request: 00:07:04.522 { 00:07:04.522 "trtype": "tcp", 00:07:04.522 "method": "nvmf_get_transports", 00:07:04.522 "req_id": 1 00:07:04.522 } 00:07:04.522 Got JSON-RPC error response 00:07:04.522 response: 00:07:04.522 { 00:07:04.522 "code": -19, 00:07:04.522 "message": "No such device" 00:07:04.522 } 00:07:04.522 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:04.522 19:26:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:04.522 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.522 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:04.522 [2024-12-05 19:26:57.808507] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.522 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.522 19:26:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:04.522 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.522 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:04.781 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.781 19:26:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:04.781 { 00:07:04.781 "subsystems": [ 00:07:04.781 { 00:07:04.781 "subsystem": "fsdev", 00:07:04.781 "config": [ 00:07:04.781 { 00:07:04.781 "method": "fsdev_set_opts", 00:07:04.781 "params": { 00:07:04.781 "fsdev_io_pool_size": 65535, 00:07:04.781 "fsdev_io_cache_size": 256 00:07:04.781 } 00:07:04.781 } 00:07:04.781 ] 00:07:04.781 }, 00:07:04.781 { 00:07:04.781 "subsystem": "keyring", 00:07:04.781 "config": [] 00:07:04.781 }, 00:07:04.782 { 00:07:04.782 "subsystem": "iobuf", 00:07:04.782 "config": [ 00:07:04.782 { 00:07:04.782 "method": "iobuf_set_options", 00:07:04.782 "params": { 00:07:04.782 "small_pool_count": 8192, 00:07:04.782 "large_pool_count": 1024, 00:07:04.782 "small_bufsize": 8192, 00:07:04.782 "large_bufsize": 135168, 00:07:04.782 "enable_numa": false 00:07:04.782 } 
00:07:04.782 } 00:07:04.782 ] 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "subsystem": "sock", 00:07:04.782 "config": [ 00:07:04.782 { 00:07:04.782 "method": "sock_set_default_impl", 00:07:04.782 "params": { 00:07:04.782 "impl_name": "uring" 00:07:04.782 } 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "method": "sock_impl_set_options", 00:07:04.782 "params": { 00:07:04.782 "impl_name": "ssl", 00:07:04.782 "recv_buf_size": 4096, 00:07:04.782 "send_buf_size": 4096, 00:07:04.782 "enable_recv_pipe": true, 00:07:04.782 "enable_quickack": false, 00:07:04.782 "enable_placement_id": 0, 00:07:04.782 "enable_zerocopy_send_server": true, 00:07:04.782 "enable_zerocopy_send_client": false, 00:07:04.782 "zerocopy_threshold": 0, 00:07:04.782 "tls_version": 0, 00:07:04.782 "enable_ktls": false 00:07:04.782 } 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "method": "sock_impl_set_options", 00:07:04.782 "params": { 00:07:04.782 "impl_name": "posix", 00:07:04.782 "recv_buf_size": 2097152, 00:07:04.782 "send_buf_size": 2097152, 00:07:04.782 "enable_recv_pipe": true, 00:07:04.782 "enable_quickack": false, 00:07:04.782 "enable_placement_id": 0, 00:07:04.782 "enable_zerocopy_send_server": true, 00:07:04.782 "enable_zerocopy_send_client": false, 00:07:04.782 "zerocopy_threshold": 0, 00:07:04.782 "tls_version": 0, 00:07:04.782 "enable_ktls": false 00:07:04.782 } 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "method": "sock_impl_set_options", 00:07:04.782 "params": { 00:07:04.782 "impl_name": "uring", 00:07:04.782 "recv_buf_size": 2097152, 00:07:04.782 "send_buf_size": 2097152, 00:07:04.782 "enable_recv_pipe": true, 00:07:04.782 "enable_quickack": false, 00:07:04.782 "enable_placement_id": 0, 00:07:04.782 "enable_zerocopy_send_server": false, 00:07:04.782 "enable_zerocopy_send_client": false, 00:07:04.782 "zerocopy_threshold": 0, 00:07:04.782 "tls_version": 0, 00:07:04.782 "enable_ktls": false 00:07:04.782 } 00:07:04.782 } 00:07:04.782 ] 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "subsystem": "vmd", 00:07:04.782 "config": [] 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "subsystem": "accel", 00:07:04.782 "config": [ 00:07:04.782 { 00:07:04.782 "method": "accel_set_options", 00:07:04.782 "params": { 00:07:04.782 "small_cache_size": 128, 00:07:04.782 "large_cache_size": 16, 00:07:04.782 "task_count": 2048, 00:07:04.782 "sequence_count": 2048, 00:07:04.782 "buf_count": 2048 00:07:04.782 } 00:07:04.782 } 00:07:04.782 ] 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "subsystem": "bdev", 00:07:04.782 "config": [ 00:07:04.782 { 00:07:04.782 "method": "bdev_set_options", 00:07:04.782 "params": { 00:07:04.782 "bdev_io_pool_size": 65535, 00:07:04.782 "bdev_io_cache_size": 256, 00:07:04.782 "bdev_auto_examine": true, 00:07:04.782 "iobuf_small_cache_size": 128, 00:07:04.782 "iobuf_large_cache_size": 16 00:07:04.782 } 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "method": "bdev_raid_set_options", 00:07:04.782 "params": { 00:07:04.782 "process_window_size_kb": 1024, 00:07:04.782 "process_max_bandwidth_mb_sec": 0 00:07:04.782 } 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "method": "bdev_iscsi_set_options", 00:07:04.782 "params": { 00:07:04.782 "timeout_sec": 30 00:07:04.782 } 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "method": "bdev_nvme_set_options", 00:07:04.782 "params": { 00:07:04.782 "action_on_timeout": "none", 00:07:04.782 "timeout_us": 0, 00:07:04.782 "timeout_admin_us": 0, 00:07:04.782 "keep_alive_timeout_ms": 10000, 00:07:04.782 "arbitration_burst": 0, 00:07:04.782 "low_priority_weight": 0, 00:07:04.782 "medium_priority_weight": 
0, 00:07:04.782 "high_priority_weight": 0, 00:07:04.782 "nvme_adminq_poll_period_us": 10000, 00:07:04.782 "nvme_ioq_poll_period_us": 0, 00:07:04.782 "io_queue_requests": 0, 00:07:04.782 "delay_cmd_submit": true, 00:07:04.782 "transport_retry_count": 4, 00:07:04.782 "bdev_retry_count": 3, 00:07:04.782 "transport_ack_timeout": 0, 00:07:04.782 "ctrlr_loss_timeout_sec": 0, 00:07:04.782 "reconnect_delay_sec": 0, 00:07:04.782 "fast_io_fail_timeout_sec": 0, 00:07:04.782 "disable_auto_failback": false, 00:07:04.782 "generate_uuids": false, 00:07:04.782 "transport_tos": 0, 00:07:04.782 "nvme_error_stat": false, 00:07:04.782 "rdma_srq_size": 0, 00:07:04.782 "io_path_stat": false, 00:07:04.782 "allow_accel_sequence": false, 00:07:04.782 "rdma_max_cq_size": 0, 00:07:04.782 "rdma_cm_event_timeout_ms": 0, 00:07:04.782 "dhchap_digests": [ 00:07:04.782 "sha256", 00:07:04.782 "sha384", 00:07:04.782 "sha512" 00:07:04.782 ], 00:07:04.782 "dhchap_dhgroups": [ 00:07:04.782 "null", 00:07:04.782 "ffdhe2048", 00:07:04.782 "ffdhe3072", 00:07:04.782 "ffdhe4096", 00:07:04.782 "ffdhe6144", 00:07:04.782 "ffdhe8192" 00:07:04.782 ] 00:07:04.782 } 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "method": "bdev_nvme_set_hotplug", 00:07:04.782 "params": { 00:07:04.782 "period_us": 100000, 00:07:04.782 "enable": false 00:07:04.782 } 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "method": "bdev_wait_for_examine" 00:07:04.782 } 00:07:04.782 ] 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "subsystem": "scsi", 00:07:04.782 "config": null 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "subsystem": "scheduler", 00:07:04.782 "config": [ 00:07:04.782 { 00:07:04.782 "method": "framework_set_scheduler", 00:07:04.782 "params": { 00:07:04.782 "name": "static" 00:07:04.782 } 00:07:04.782 } 00:07:04.782 ] 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "subsystem": "vhost_scsi", 00:07:04.782 "config": [] 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "subsystem": "vhost_blk", 00:07:04.782 "config": [] 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "subsystem": "ublk", 00:07:04.782 "config": [] 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "subsystem": "nbd", 00:07:04.782 "config": [] 00:07:04.782 }, 00:07:04.782 { 00:07:04.782 "subsystem": "nvmf", 00:07:04.782 "config": [ 00:07:04.782 { 00:07:04.782 "method": "nvmf_set_config", 00:07:04.783 "params": { 00:07:04.783 "discovery_filter": "match_any", 00:07:04.783 "admin_cmd_passthru": { 00:07:04.783 "identify_ctrlr": false 00:07:04.783 }, 00:07:04.783 "dhchap_digests": [ 00:07:04.783 "sha256", 00:07:04.783 "sha384", 00:07:04.783 "sha512" 00:07:04.783 ], 00:07:04.783 "dhchap_dhgroups": [ 00:07:04.783 "null", 00:07:04.783 "ffdhe2048", 00:07:04.783 "ffdhe3072", 00:07:04.783 "ffdhe4096", 00:07:04.783 "ffdhe6144", 00:07:04.783 "ffdhe8192" 00:07:04.783 ] 00:07:04.783 } 00:07:04.783 }, 00:07:04.783 { 00:07:04.783 "method": "nvmf_set_max_subsystems", 00:07:04.783 "params": { 00:07:04.783 "max_subsystems": 1024 00:07:04.783 } 00:07:04.783 }, 00:07:04.783 { 00:07:04.783 "method": "nvmf_set_crdt", 00:07:04.783 "params": { 00:07:04.783 "crdt1": 0, 00:07:04.783 "crdt2": 0, 00:07:04.783 "crdt3": 0 00:07:04.783 } 00:07:04.783 }, 00:07:04.783 { 00:07:04.783 "method": "nvmf_create_transport", 00:07:04.783 "params": { 00:07:04.783 "trtype": "TCP", 00:07:04.783 "max_queue_depth": 128, 00:07:04.783 "max_io_qpairs_per_ctrlr": 127, 00:07:04.783 "in_capsule_data_size": 4096, 00:07:04.783 "max_io_size": 131072, 00:07:04.783 "io_unit_size": 131072, 00:07:04.783 "max_aq_depth": 128, 00:07:04.783 "num_shared_buffers": 511, 00:07:04.783 
"buf_cache_size": 4294967295, 00:07:04.783 "dif_insert_or_strip": false, 00:07:04.783 "zcopy": false, 00:07:04.783 "c2h_success": true, 00:07:04.783 "sock_priority": 0, 00:07:04.783 "abort_timeout_sec": 1, 00:07:04.783 "ack_timeout": 0, 00:07:04.783 "data_wr_pool_size": 0 00:07:04.783 } 00:07:04.783 } 00:07:04.783 ] 00:07:04.783 }, 00:07:04.783 { 00:07:04.783 "subsystem": "iscsi", 00:07:04.783 "config": [ 00:07:04.783 { 00:07:04.783 "method": "iscsi_set_options", 00:07:04.783 "params": { 00:07:04.783 "node_base": "iqn.2016-06.io.spdk", 00:07:04.783 "max_sessions": 128, 00:07:04.783 "max_connections_per_session": 2, 00:07:04.783 "max_queue_depth": 64, 00:07:04.783 "default_time2wait": 2, 00:07:04.783 "default_time2retain": 20, 00:07:04.783 "first_burst_length": 8192, 00:07:04.783 "immediate_data": true, 00:07:04.783 "allow_duplicated_isid": false, 00:07:04.783 "error_recovery_level": 0, 00:07:04.783 "nop_timeout": 60, 00:07:04.783 "nop_in_interval": 30, 00:07:04.783 "disable_chap": false, 00:07:04.783 "require_chap": false, 00:07:04.783 "mutual_chap": false, 00:07:04.783 "chap_group": 0, 00:07:04.783 "max_large_datain_per_connection": 64, 00:07:04.783 "max_r2t_per_connection": 4, 00:07:04.783 "pdu_pool_size": 36864, 00:07:04.783 "immediate_data_pool_size": 16384, 00:07:04.783 "data_out_pool_size": 2048 00:07:04.783 } 00:07:04.783 } 00:07:04.783 ] 00:07:04.783 } 00:07:04.783 ] 00:07:04.783 } 00:07:04.783 19:26:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:04.783 19:26:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56997 00:07:04.783 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56997 ']' 00:07:04.783 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56997 00:07:04.783 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:04.783 19:26:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.783 19:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56997 00:07:04.783 killing process with pid 56997 00:07:04.783 19:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.783 19:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.783 19:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56997' 00:07:04.783 19:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56997 00:07:04.783 19:26:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56997 00:07:05.041 19:26:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:05.041 19:26:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57017 00:07:05.041 19:26:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:10.314 19:27:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57017 00:07:10.315 19:27:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57017 ']' 00:07:10.315 19:27:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57017 00:07:10.315 19:27:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:10.315 19:27:03 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.315 19:27:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57017 00:07:10.315 killing process with pid 57017 00:07:10.315 19:27:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.315 19:27:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.315 19:27:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57017' 00:07:10.315 19:27:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57017 00:07:10.315 19:27:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57017 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:10.574 ************************************ 00:07:10.574 END TEST skip_rpc_with_json 00:07:10.574 ************************************ 00:07:10.574 00:07:10.574 real 0m6.631s 00:07:10.574 user 0m6.144s 00:07:10.574 sys 0m0.679s 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:10.574 19:27:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:10.574 19:27:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.574 19:27:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.574 19:27:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.574 ************************************ 00:07:10.574 START TEST skip_rpc_with_delay 00:07:10.574 ************************************ 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:10.574 19:27:03 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:10.574 [2024-12-05 19:27:03.954611] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:10.574 ************************************ 00:07:10.574 END TEST skip_rpc_with_delay 00:07:10.574 ************************************ 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:10.574 00:07:10.574 real 0m0.071s 00:07:10.574 user 0m0.041s 00:07:10.574 sys 0m0.029s 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.574 19:27:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:10.574 19:27:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:10.834 19:27:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:10.834 19:27:04 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:10.834 19:27:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.834 19:27:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.834 19:27:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.834 ************************************ 00:07:10.834 START TEST exit_on_failed_rpc_init 00:07:10.834 ************************************ 00:07:10.834 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:10.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.834 19:27:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57126 00:07:10.834 19:27:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57126 00:07:10.834 19:27:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.834 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57126 ']' 00:07:10.834 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.834 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.834 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.834 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.834 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:10.834 [2024-12-05 19:27:04.077379] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:07:10.834 [2024-12-05 19:27:04.077472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57126 ] 00:07:10.834 [2024-12-05 19:27:04.218651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.098 [2024-12-05 19:27:04.279561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.098 [2024-12-05 19:27:04.351934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.358 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.358 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:11.358 19:27:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.358 19:27:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:11.358 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:11.358 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:11.358 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:11.358 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.358 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:11.358 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.358 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:11.358 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.358 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:11.358 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:11.358 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:11.358 [2024-12-05 19:27:04.633273] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:11.358 [2024-12-05 19:27:04.633741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57137 ] 00:07:11.358 [2024-12-05 19:27:04.791086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.619 [2024-12-05 19:27:04.866553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.619 [2024-12-05 19:27:04.866663] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:11.619 [2024-12-05 19:27:04.866688] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:11.619 [2024-12-05 19:27:04.866703] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57126 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57126 ']' 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57126 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57126 00:07:11.619 killing process with pid 57126 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57126' 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57126 00:07:11.619 19:27:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57126 00:07:12.189 ************************************ 00:07:12.189 END TEST exit_on_failed_rpc_init 00:07:12.189 ************************************ 00:07:12.189 00:07:12.189 real 0m1.338s 00:07:12.189 user 0m1.431s 00:07:12.189 sys 0m0.397s 00:07:12.189 19:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.189 19:27:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:12.189 19:27:05 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:12.189 00:07:12.189 real 0m13.884s 00:07:12.189 user 0m12.847s 00:07:12.189 sys 0m1.626s 00:07:12.189 19:27:05 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.189 ************************************ 00:07:12.189 END TEST skip_rpc 00:07:12.189 ************************************ 00:07:12.189 19:27:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.189 19:27:05 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:12.189 19:27:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.189 19:27:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.189 19:27:05 -- common/autotest_common.sh@10 -- # set +x 00:07:12.189 
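The skip_rpc trace above exercises spdk_tgt's JSON-config path: the target's configuration (the large JSON dump earlier in the log) is written to a file, the target is relaunched read-only from that file with --no-rpc-server, and the test greps the target's log for the nvmf TCP transport banner. A rough sketch of that capture-and-relaunch flow, assuming an SPDK checkout as the working directory and illustrative /tmp paths (the trace itself uses test/rpc/config.json and test/rpc/log.txt under the repo, and the explicit save_config call is an assumption; the trace only shows the resulting JSON, not the command that produced it):

  # Start a target with its RPC server enabled and dump the live configuration.
  ./build/bin/spdk_tgt -m 0x1 & tgt_pid=$!
  sleep 2   # the real tests poll with a waitforlisten helper; a fixed sleep keeps the sketch short
  ./scripts/rpc.py save_config > /tmp/config.json
  kill "$tgt_pid"; wait "$tgt_pid"

  # Relaunch from the saved JSON with no RPC server and confirm the nvmf TCP
  # transport initialized by checking the startup log for its banner.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json > /tmp/log.txt 2>&1 & tgt_pid=$!
  sleep 5
  grep -q 'TCP Transport Init' /tmp/log.txt && echo 'transport initialized'
  kill "$tgt_pid"; wait "$tgt_pid"

The flags --no-rpc-server, -m 0x1 and --json, the five-second sleep, and the 'TCP Transport Init' grep all appear verbatim in the trace; only the file locations and the sleep-based wait are simplifications for the sketch.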
************************************ 00:07:12.189 START TEST rpc_client 00:07:12.189 ************************************ 00:07:12.189 19:27:05 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:12.189 * Looking for test storage... 00:07:12.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:12.189 19:27:05 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:12.189 19:27:05 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:07:12.189 19:27:05 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:12.448 19:27:05 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.448 19:27:05 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:12.448 19:27:05 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.448 19:27:05 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:12.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.448 --rc genhtml_branch_coverage=1 00:07:12.448 --rc genhtml_function_coverage=1 00:07:12.448 --rc genhtml_legend=1 00:07:12.448 --rc geninfo_all_blocks=1 00:07:12.448 --rc geninfo_unexecuted_blocks=1 00:07:12.448 00:07:12.448 ' 00:07:12.448 19:27:05 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:12.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.448 --rc genhtml_branch_coverage=1 00:07:12.448 --rc genhtml_function_coverage=1 00:07:12.448 --rc genhtml_legend=1 00:07:12.448 --rc geninfo_all_blocks=1 00:07:12.448 --rc geninfo_unexecuted_blocks=1 00:07:12.448 00:07:12.448 ' 00:07:12.448 19:27:05 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:12.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.448 --rc genhtml_branch_coverage=1 00:07:12.448 --rc genhtml_function_coverage=1 00:07:12.448 --rc genhtml_legend=1 00:07:12.448 --rc geninfo_all_blocks=1 00:07:12.448 --rc geninfo_unexecuted_blocks=1 00:07:12.448 00:07:12.448 ' 00:07:12.448 19:27:05 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:12.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.448 --rc genhtml_branch_coverage=1 00:07:12.448 --rc genhtml_function_coverage=1 00:07:12.448 --rc genhtml_legend=1 00:07:12.448 --rc geninfo_all_blocks=1 00:07:12.448 --rc geninfo_unexecuted_blocks=1 00:07:12.448 00:07:12.448 ' 00:07:12.448 19:27:05 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:12.448 OK 00:07:12.448 19:27:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:12.448 00:07:12.448 real 0m0.228s 00:07:12.448 user 0m0.148s 00:07:12.448 sys 0m0.089s 00:07:12.448 ************************************ 00:07:12.448 END TEST rpc_client 00:07:12.448 ************************************ 00:07:12.448 19:27:05 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.448 19:27:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:12.448 19:27:05 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:12.448 19:27:05 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.448 19:27:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.448 19:27:05 -- common/autotest_common.sh@10 -- # set +x 00:07:12.448 ************************************ 00:07:12.448 START TEST json_config 00:07:12.448 ************************************ 00:07:12.448 19:27:05 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:12.448 19:27:05 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:12.448 19:27:05 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:07:12.448 19:27:05 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:12.708 19:27:05 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:12.708 19:27:05 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.708 19:27:05 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.708 19:27:05 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.708 19:27:05 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.708 19:27:05 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.708 19:27:05 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.708 19:27:05 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.708 19:27:05 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.708 19:27:05 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.708 19:27:05 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.708 19:27:05 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.708 19:27:05 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:12.708 19:27:05 json_config -- scripts/common.sh@345 -- # : 1 00:07:12.708 19:27:05 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.708 19:27:05 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.708 19:27:05 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:12.708 19:27:05 json_config -- scripts/common.sh@353 -- # local d=1 00:07:12.708 19:27:05 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.708 19:27:05 json_config -- scripts/common.sh@355 -- # echo 1 00:07:12.708 19:27:05 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.708 19:27:05 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:12.708 19:27:05 json_config -- scripts/common.sh@353 -- # local d=2 00:07:12.708 19:27:05 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.708 19:27:05 json_config -- scripts/common.sh@355 -- # echo 2 00:07:12.708 19:27:05 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.708 19:27:05 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.708 19:27:05 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.708 19:27:05 json_config -- scripts/common.sh@368 -- # return 0 00:07:12.708 19:27:05 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.708 19:27:05 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.708 --rc genhtml_branch_coverage=1 00:07:12.708 --rc genhtml_function_coverage=1 00:07:12.708 --rc genhtml_legend=1 00:07:12.708 --rc geninfo_all_blocks=1 00:07:12.708 --rc geninfo_unexecuted_blocks=1 00:07:12.708 00:07:12.708 ' 00:07:12.708 19:27:05 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.708 --rc genhtml_branch_coverage=1 00:07:12.708 --rc genhtml_function_coverage=1 00:07:12.708 --rc genhtml_legend=1 00:07:12.708 --rc geninfo_all_blocks=1 00:07:12.708 --rc geninfo_unexecuted_blocks=1 00:07:12.708 00:07:12.708 ' 00:07:12.708 19:27:05 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.708 --rc genhtml_branch_coverage=1 00:07:12.708 --rc genhtml_function_coverage=1 00:07:12.708 --rc genhtml_legend=1 00:07:12.708 --rc geninfo_all_blocks=1 00:07:12.708 --rc geninfo_unexecuted_blocks=1 00:07:12.708 00:07:12.708 ' 00:07:12.708 19:27:05 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.708 --rc genhtml_branch_coverage=1 00:07:12.708 --rc genhtml_function_coverage=1 00:07:12.708 --rc genhtml_legend=1 00:07:12.708 --rc geninfo_all_blocks=1 00:07:12.708 --rc geninfo_unexecuted_blocks=1 00:07:12.708 00:07:12.708 ' 00:07:12.708 19:27:05 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.708 19:27:05 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.708 19:27:05 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:12.708 19:27:05 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:12.708 19:27:05 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.708 19:27:05 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.708 19:27:05 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.708 19:27:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.708 19:27:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.708 19:27:05 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.709 19:27:05 json_config -- paths/export.sh@5 -- # export PATH 00:07:12.709 19:27:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.709 19:27:05 json_config -- nvmf/common.sh@51 -- # : 0 00:07:12.709 19:27:05 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:12.709 19:27:05 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:12.709 19:27:05 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.709 19:27:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.709 19:27:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.709 19:27:05 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:12.709 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:12.709 19:27:05 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:12.709 19:27:05 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:12.709 19:27:05 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:12.709 INFO: JSON configuration test init 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:12.709 19:27:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:12.709 19:27:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:12.709 19:27:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:12.709 19:27:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.709 19:27:05 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:12.709 19:27:05 json_config -- json_config/common.sh@9 -- # local app=target 00:07:12.709 19:27:05 json_config -- json_config/common.sh@10 -- # shift 
00:07:12.709 19:27:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:12.709 19:27:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:12.709 19:27:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:12.709 19:27:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:12.709 19:27:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:12.709 19:27:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57276 00:07:12.709 19:27:05 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:12.709 Waiting for target to run... 00:07:12.709 19:27:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:12.709 19:27:05 json_config -- json_config/common.sh@25 -- # waitforlisten 57276 /var/tmp/spdk_tgt.sock 00:07:12.709 19:27:05 json_config -- common/autotest_common.sh@835 -- # '[' -z 57276 ']' 00:07:12.709 19:27:05 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:12.709 19:27:05 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.709 19:27:05 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:12.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:12.709 19:27:05 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.709 19:27:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.709 [2024-12-05 19:27:06.032171] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:12.709 [2024-12-05 19:27:06.032286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57276 ] 00:07:13.276 [2024-12-05 19:27:06.461683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.276 [2024-12-05 19:27:06.508529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.534 00:07:13.534 19:27:06 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.534 19:27:06 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:13.534 19:27:06 json_config -- json_config/common.sh@26 -- # echo '' 00:07:13.534 19:27:06 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:13.534 19:27:06 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:13.534 19:27:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:13.534 19:27:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:13.534 19:27:06 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:13.534 19:27:06 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:13.534 19:27:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:13.534 19:27:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:13.793 19:27:07 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:13.793 19:27:07 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:13.793 19:27:07 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:14.051 [2024-12-05 19:27:07.321694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.310 19:27:07 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:07:14.310 19:27:07 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:14.310 19:27:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.310 19:27:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:14.310 19:27:07 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:14.310 19:27:07 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:14.310 19:27:07 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:14.310 19:27:07 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:14.310 19:27:07 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:14.310 19:27:07 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:14.310 19:27:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:14.310 19:27:07 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:14.569 19:27:07 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:14.569 19:27:07 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:14.569 19:27:07 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:14.569 19:27:07 json_config -- json_config/json_config.sh@54 -- # sort 00:07:14.569 19:27:07 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:14.569 19:27:07 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:14.569 19:27:07 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:14.569 19:27:07 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:07:14.569 19:27:07 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:14.569 19:27:07 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:14.569 19:27:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:14.570 19:27:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:14.570 19:27:07 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:14.570 19:27:07 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:14.570 19:27:07 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:14.570 19:27:07 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:14.570 19:27:07 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:14.570 19:27:07 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:14.570 19:27:07 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:14.570 19:27:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.570 19:27:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:14.570 19:27:07 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:14.570 19:27:07 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:07:14.570 19:27:07 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:14.570 19:27:07 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:14.570 19:27:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:14.828 MallocForNvmf0 00:07:14.828 19:27:08 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:14.829 19:27:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:15.087 MallocForNvmf1 00:07:15.087 19:27:08 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:15.087 19:27:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:15.345 [2024-12-05 19:27:08.702452] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.345 19:27:08 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:15.345 19:27:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:15.603 19:27:09 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:15.603 19:27:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:15.862 19:27:09 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:15.862 19:27:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:16.122 19:27:09 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:16.122 19:27:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:16.381 [2024-12-05 19:27:09.775263] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:16.381 19:27:09 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:16.381 19:27:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:16.381 19:27:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:16.641 19:27:09 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:16.641 19:27:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:16.641 19:27:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:16.641 19:27:09 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:07:16.641 19:27:09 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:16.641 19:27:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:16.899 MallocBdevForConfigChangeCheck 00:07:16.899 19:27:10 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:16.899 19:27:10 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:16.899 19:27:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:16.899 19:27:10 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:16.899 19:27:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:17.158 INFO: shutting down applications... 00:07:17.158 19:27:10 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:07:17.158 19:27:10 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:17.158 19:27:10 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:17.158 19:27:10 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:17.158 19:27:10 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:17.725 Calling clear_iscsi_subsystem 00:07:17.725 Calling clear_nvmf_subsystem 00:07:17.725 Calling clear_nbd_subsystem 00:07:17.725 Calling clear_ublk_subsystem 00:07:17.725 Calling clear_vhost_blk_subsystem 00:07:17.725 Calling clear_vhost_scsi_subsystem 00:07:17.725 Calling clear_bdev_subsystem 00:07:17.725 19:27:10 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:17.725 19:27:10 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:17.725 19:27:10 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:17.725 19:27:10 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:17.725 19:27:10 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:17.725 19:27:10 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:17.984 19:27:11 json_config -- json_config/json_config.sh@352 -- # break 00:07:17.984 19:27:11 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:17.984 19:27:11 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:07:17.984 19:27:11 json_config -- json_config/common.sh@31 -- # local app=target 00:07:17.984 19:27:11 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:17.984 19:27:11 json_config -- json_config/common.sh@35 -- # [[ -n 57276 ]] 00:07:17.984 19:27:11 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57276 00:07:17.984 19:27:11 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:17.984 19:27:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:17.984 19:27:11 json_config -- json_config/common.sh@41 -- # kill -0 57276 00:07:17.984 19:27:11 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:07:18.552 19:27:11 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:18.552 19:27:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:18.552 19:27:11 json_config -- json_config/common.sh@41 -- # kill -0 57276 00:07:18.552 SPDK target shutdown done 00:07:18.552 19:27:11 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:18.552 19:27:11 json_config -- json_config/common.sh@43 -- # break 00:07:18.552 19:27:11 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:18.552 19:27:11 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:18.552 INFO: relaunching applications... 00:07:18.552 19:27:11 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:18.552 19:27:11 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:18.552 19:27:11 json_config -- json_config/common.sh@9 -- # local app=target 00:07:18.552 19:27:11 json_config -- json_config/common.sh@10 -- # shift 00:07:18.552 19:27:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:18.552 19:27:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:18.552 Waiting for target to run... 00:07:18.552 19:27:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:18.552 19:27:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:18.552 19:27:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:18.552 19:27:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57472 00:07:18.552 19:27:11 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:18.552 19:27:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:18.552 19:27:11 json_config -- json_config/common.sh@25 -- # waitforlisten 57472 /var/tmp/spdk_tgt.sock 00:07:18.552 19:27:11 json_config -- common/autotest_common.sh@835 -- # '[' -z 57472 ']' 00:07:18.552 19:27:11 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:18.552 19:27:11 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.552 19:27:11 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:18.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:18.552 19:27:11 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.552 19:27:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:18.552 [2024-12-05 19:27:11.900590] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:07:18.552 [2024-12-05 19:27:11.900946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57472 ] 00:07:19.120 [2024-12-05 19:27:12.324273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.120 [2024-12-05 19:27:12.370239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.120 [2024-12-05 19:27:12.508904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.379 [2024-12-05 19:27:12.726038] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.379 [2024-12-05 19:27:12.758185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:19.637 00:07:19.637 INFO: Checking if target configuration is the same... 00:07:19.637 19:27:12 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.637 19:27:12 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:19.638 19:27:12 json_config -- json_config/common.sh@26 -- # echo '' 00:07:19.638 19:27:12 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:19.638 19:27:12 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:19.638 19:27:12 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:19.638 19:27:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:19.638 19:27:12 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:19.638 + '[' 2 -ne 2 ']' 00:07:19.638 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:19.638 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:19.638 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:19.638 +++ basename /dev/fd/62 00:07:19.638 ++ mktemp /tmp/62.XXX 00:07:19.638 + tmp_file_1=/tmp/62.VPO 00:07:19.638 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:19.638 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:19.638 + tmp_file_2=/tmp/spdk_tgt_config.json.8HC 00:07:19.638 + ret=0 00:07:19.638 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:19.895 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:20.154 + diff -u /tmp/62.VPO /tmp/spdk_tgt_config.json.8HC 00:07:20.154 INFO: JSON config files are the same 00:07:20.154 + echo 'INFO: JSON config files are the same' 00:07:20.154 + rm /tmp/62.VPO /tmp/spdk_tgt_config.json.8HC 00:07:20.154 + exit 0 00:07:20.154 INFO: changing configuration and checking if this can be detected... 00:07:20.154 19:27:13 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:20.154 19:27:13 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:07:20.154 19:27:13 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:20.154 19:27:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:20.412 19:27:13 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:20.412 19:27:13 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:20.412 19:27:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:20.412 + '[' 2 -ne 2 ']' 00:07:20.412 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:20.412 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:20.412 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:20.412 +++ basename /dev/fd/62 00:07:20.412 ++ mktemp /tmp/62.XXX 00:07:20.412 + tmp_file_1=/tmp/62.KMw 00:07:20.412 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:20.412 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:20.412 + tmp_file_2=/tmp/spdk_tgt_config.json.TOB 00:07:20.412 + ret=0 00:07:20.412 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:20.670 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:20.670 + diff -u /tmp/62.KMw /tmp/spdk_tgt_config.json.TOB 00:07:20.929 + ret=1 00:07:20.929 + echo '=== Start of file: /tmp/62.KMw ===' 00:07:20.929 + cat /tmp/62.KMw 00:07:20.929 + echo '=== End of file: /tmp/62.KMw ===' 00:07:20.929 + echo '' 00:07:20.929 + echo '=== Start of file: /tmp/spdk_tgt_config.json.TOB ===' 00:07:20.929 + cat /tmp/spdk_tgt_config.json.TOB 00:07:20.929 + echo '=== End of file: /tmp/spdk_tgt_config.json.TOB ===' 00:07:20.929 + echo '' 00:07:20.929 + rm /tmp/62.KMw /tmp/spdk_tgt_config.json.TOB 00:07:20.929 + exit 1 00:07:20.929 INFO: configuration change detected. 00:07:20.929 19:27:14 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
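The two checks above reduce to the same recipe: dump the live configuration over RPC, normalize both JSON documents with config_filter.py -method sort, and diff the results (exit 0 when they match, 1 once the malloc bdev has been deleted). A minimal standalone sketch of that recipe, using the paths from this run and assuming config_filter.py filters stdin to stdout as the pipeline suggests:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk_tgt.sock
    live=$(mktemp)
    disk=$(mktemp)
    # Live target config, sorted for a stable comparison.
    "$SPDK/scripts/rpc.py" -s "$SOCK" save_config \
        | "$SPDK/test/json_config/config_filter.py" -method sort > "$live"
    # On-disk config written earlier in the test, sorted the same way.
    "$SPDK/test/json_config/config_filter.py" -method sort \
        < "$SPDK/spdk_tgt_config.json" > "$disk"
    if diff -u "$live" "$disk"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$disk"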
00:07:20.929 19:27:14 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:20.929 19:27:14 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:20.929 19:27:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.929 19:27:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:20.929 19:27:14 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:20.929 19:27:14 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:20.929 19:27:14 json_config -- json_config/json_config.sh@324 -- # [[ -n 57472 ]] 00:07:20.929 19:27:14 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:20.929 19:27:14 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:20.929 19:27:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.929 19:27:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:20.929 19:27:14 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:20.929 19:27:14 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:20.929 19:27:14 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:20.929 19:27:14 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:20.929 19:27:14 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:20.929 19:27:14 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:20.929 19:27:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:20.929 19:27:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:20.929 19:27:14 json_config -- json_config/json_config.sh@330 -- # killprocess 57472 00:07:20.929 19:27:14 json_config -- common/autotest_common.sh@954 -- # '[' -z 57472 ']' 00:07:20.929 19:27:14 json_config -- common/autotest_common.sh@958 -- # kill -0 57472 00:07:20.929 19:27:14 json_config -- common/autotest_common.sh@959 -- # uname 00:07:20.929 19:27:14 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.929 19:27:14 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57472 00:07:20.929 killing process with pid 57472 00:07:20.929 19:27:14 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.929 19:27:14 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.929 19:27:14 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57472' 00:07:20.929 19:27:14 json_config -- common/autotest_common.sh@973 -- # kill 57472 00:07:20.929 19:27:14 json_config -- common/autotest_common.sh@978 -- # wait 57472 00:07:21.228 19:27:14 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:21.228 19:27:14 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:21.228 19:27:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:21.228 19:27:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:21.228 INFO: Success 00:07:21.228 19:27:14 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:21.228 19:27:14 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:21.228 00:07:21.228 real 0m8.740s 00:07:21.228 user 0m12.534s 00:07:21.228 sys 0m1.780s 00:07:21.228 
************************************ 00:07:21.228 END TEST json_config 00:07:21.228 ************************************ 00:07:21.228 19:27:14 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.228 19:27:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:21.228 19:27:14 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:21.228 19:27:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.228 19:27:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.228 19:27:14 -- common/autotest_common.sh@10 -- # set +x 00:07:21.228 ************************************ 00:07:21.228 START TEST json_config_extra_key 00:07:21.228 ************************************ 00:07:21.228 19:27:14 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:21.228 19:27:14 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:21.228 19:27:14 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:07:21.228 19:27:14 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:21.540 19:27:14 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.540 19:27:14 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:21.540 19:27:14 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.540 19:27:14 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:21.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.540 --rc genhtml_branch_coverage=1 00:07:21.540 --rc genhtml_function_coverage=1 00:07:21.540 --rc genhtml_legend=1 00:07:21.540 --rc geninfo_all_blocks=1 00:07:21.540 --rc geninfo_unexecuted_blocks=1 00:07:21.540 00:07:21.540 ' 00:07:21.540 19:27:14 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:21.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.540 --rc genhtml_branch_coverage=1 00:07:21.540 --rc genhtml_function_coverage=1 00:07:21.540 --rc genhtml_legend=1 00:07:21.540 --rc geninfo_all_blocks=1 00:07:21.541 --rc geninfo_unexecuted_blocks=1 00:07:21.541 00:07:21.541 ' 00:07:21.541 19:27:14 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:21.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.541 --rc genhtml_branch_coverage=1 00:07:21.541 --rc genhtml_function_coverage=1 00:07:21.541 --rc genhtml_legend=1 00:07:21.541 --rc geninfo_all_blocks=1 00:07:21.541 --rc geninfo_unexecuted_blocks=1 00:07:21.541 00:07:21.541 ' 00:07:21.541 19:27:14 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:21.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.541 --rc genhtml_branch_coverage=1 00:07:21.541 --rc genhtml_function_coverage=1 00:07:21.541 --rc genhtml_legend=1 00:07:21.541 --rc geninfo_all_blocks=1 00:07:21.541 --rc geninfo_unexecuted_blocks=1 00:07:21.541 00:07:21.541 ' 00:07:21.541 19:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.541 19:27:14 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:21.541 19:27:14 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:21.541 19:27:14 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.541 19:27:14 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.541 19:27:14 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.541 19:27:14 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.541 19:27:14 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.541 19:27:14 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.541 19:27:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:21.541 19:27:14 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:21.541 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:21.541 19:27:14 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:21.541 19:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:21.541 19:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:21.541 19:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:21.541 19:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:21.541 19:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:21.541 19:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:21.541 INFO: launching applications... 00:07:21.541 19:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:21.541 19:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:21.541 19:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:21.541 19:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:21.541 19:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
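The associative arrays declared above (app_pid, app_socket, app_params, configs_path) are all the per-app bookkeeping json_config/common.sh keeps; starting the 'target' app amounts to expanding them into one spdk_tgt command line. A simplified sketch of that expansion, assuming waitforlisten is the usual autotest helper taking a pid and an RPC socket path:

    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    app=target
    # app_params is expanded unquoted on purpose so '-m 0x1 -s 1024' splits into words.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!
    waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"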
00:07:21.541 19:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:21.541 19:27:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:21.541 19:27:14 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:21.541 19:27:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:21.541 19:27:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:21.541 19:27:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:21.541 19:27:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:21.541 19:27:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:21.541 19:27:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57626 00:07:21.541 19:27:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:21.541 Waiting for target to run... 00:07:21.541 19:27:14 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:21.541 19:27:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57626 /var/tmp/spdk_tgt.sock 00:07:21.541 19:27:14 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57626 ']' 00:07:21.541 19:27:14 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:21.541 19:27:14 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.541 19:27:14 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:21.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:21.541 19:27:14 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.541 19:27:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:21.541 [2024-12-05 19:27:14.790234] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:21.541 [2024-12-05 19:27:14.790664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57626 ] 00:07:21.799 [2024-12-05 19:27:15.223401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.056 [2024-12-05 19:27:15.266828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.056 [2024-12-05 19:27:15.298461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.622 00:07:22.622 INFO: shutting down applications... 00:07:22.622 19:27:15 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.622 19:27:15 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:22.622 19:27:15 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:22.622 19:27:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:07:22.622 19:27:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:22.622 19:27:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:22.622 19:27:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:22.622 19:27:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57626 ]] 00:07:22.622 19:27:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57626 00:07:22.622 19:27:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:22.622 19:27:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:22.622 19:27:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57626 00:07:22.622 19:27:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:22.880 19:27:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:22.880 19:27:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:22.880 19:27:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57626 00:07:22.880 19:27:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:22.880 19:27:16 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:22.880 19:27:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:22.880 19:27:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:22.880 SPDK target shutdown done 00:07:22.880 19:27:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:22.880 Success 00:07:22.880 ************************************ 00:07:22.880 END TEST json_config_extra_key 00:07:22.880 ************************************ 00:07:22.880 00:07:22.880 real 0m1.744s 00:07:22.880 user 0m1.608s 00:07:22.880 sys 0m0.443s 00:07:22.880 19:27:16 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.880 19:27:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:22.880 19:27:16 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:22.880 19:27:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.880 19:27:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.880 19:27:16 -- common/autotest_common.sh@10 -- # set +x 00:07:23.137 ************************************ 00:07:23.137 START TEST alias_rpc 00:07:23.137 ************************************ 00:07:23.137 19:27:16 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:23.137 * Looking for test storage... 
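The shutdown that just completed is a plain SIGINT plus a bounded poll: kill -0 only probes whether the pid still exists, and the loop gives the reactor up to thirty 0.5 s slices to exit on its own. The same loop reduced to a standalone sketch (the escalation path for a hung target is not exercised in this run and is left out):

    pid=57626                        # the json_config_extra_key target above
    kill -SIGINT "$pid"              # ask the app to shut down cleanly
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5                    # matches the poll interval in the trace
    done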
00:07:23.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:23.137 19:27:16 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:23.137 19:27:16 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:23.137 19:27:16 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:23.137 19:27:16 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.138 19:27:16 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:23.138 19:27:16 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
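The lcov probe that keeps reappearing between tests is scripts/common.sh deciding whether the installed lcov (reported here as 1.15) is older than version 2, so it can pick compatible --rc options. The comparison is a component-wise numeric walk over the dotted version strings; a compact sketch of the same idea, assuming purely numeric components:

    version_lt() {                   # version_lt 1.15 2  ->  true (1 < 2 on the first field)
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                     # equal versions are not "less than"
    }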
00:07:23.138 19:27:16 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:23.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.138 --rc genhtml_branch_coverage=1 00:07:23.138 --rc genhtml_function_coverage=1 00:07:23.138 --rc genhtml_legend=1 00:07:23.138 --rc geninfo_all_blocks=1 00:07:23.138 --rc geninfo_unexecuted_blocks=1 00:07:23.138 00:07:23.138 ' 00:07:23.138 19:27:16 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:23.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.138 --rc genhtml_branch_coverage=1 00:07:23.138 --rc genhtml_function_coverage=1 00:07:23.138 --rc genhtml_legend=1 00:07:23.138 --rc geninfo_all_blocks=1 00:07:23.138 --rc geninfo_unexecuted_blocks=1 00:07:23.138 00:07:23.138 ' 00:07:23.138 19:27:16 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:23.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.138 --rc genhtml_branch_coverage=1 00:07:23.138 --rc genhtml_function_coverage=1 00:07:23.138 --rc genhtml_legend=1 00:07:23.138 --rc geninfo_all_blocks=1 00:07:23.138 --rc geninfo_unexecuted_blocks=1 00:07:23.138 00:07:23.138 ' 00:07:23.138 19:27:16 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:23.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.138 --rc genhtml_branch_coverage=1 00:07:23.138 --rc genhtml_function_coverage=1 00:07:23.138 --rc genhtml_legend=1 00:07:23.138 --rc geninfo_all_blocks=1 00:07:23.138 --rc geninfo_unexecuted_blocks=1 00:07:23.138 00:07:23.138 ' 00:07:23.138 19:27:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:23.138 19:27:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57702 00:07:23.138 19:27:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57702 00:07:23.138 19:27:16 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57702 ']' 00:07:23.138 19:27:16 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.138 19:27:16 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.138 19:27:16 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.138 19:27:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:23.138 19:27:16 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.138 19:27:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.395 [2024-12-05 19:27:16.576608] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:07:23.395 [2024-12-05 19:27:16.576977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57702 ] 00:07:23.395 [2024-12-05 19:27:16.719178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.395 [2024-12-05 19:27:16.781421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.653 [2024-12-05 19:27:16.853073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.220 19:27:17 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.220 19:27:17 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:24.220 19:27:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:24.480 19:27:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57702 00:07:24.480 19:27:17 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57702 ']' 00:07:24.480 19:27:17 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57702 00:07:24.480 19:27:17 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:24.480 19:27:17 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.480 19:27:17 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57702 00:07:24.480 killing process with pid 57702 00:07:24.480 19:27:17 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.480 19:27:17 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.480 19:27:17 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57702' 00:07:24.480 19:27:17 alias_rpc -- common/autotest_common.sh@973 -- # kill 57702 00:07:24.480 19:27:17 alias_rpc -- common/autotest_common.sh@978 -- # wait 57702 00:07:25.047 ************************************ 00:07:25.047 END TEST alias_rpc 00:07:25.047 ************************************ 00:07:25.047 00:07:25.047 real 0m1.932s 00:07:25.047 user 0m2.188s 00:07:25.047 sys 0m0.455s 00:07:25.047 19:27:18 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.047 19:27:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.047 19:27:18 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:25.047 19:27:18 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:25.047 19:27:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.047 19:27:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.047 19:27:18 -- common/autotest_common.sh@10 -- # set +x 00:07:25.047 ************************************ 00:07:25.047 START TEST spdkcli_tcp 00:07:25.047 ************************************ 00:07:25.047 19:27:18 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:25.047 * Looking for test storage... 
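The alias_rpc test above boils down to a single call, rpc.py load_config -i, which takes a JSON configuration on stdin and, with -i, also accepts deprecated (aliased) method names. A hedged round-trip example of the same subcommands; the pipe is illustrative (the real test feeds its own prepared config), and the stdin/stdout defaults are assumed from the bare invocation in the trace:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Dump the running target's configuration and feed it straight back,
    # letting load_config resolve old method-name aliases along the way.
    "$SPDK/scripts/rpc.py" save_config | "$SPDK/scripts/rpc.py" load_config -i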
00:07:25.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:25.047 19:27:18 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:25.047 19:27:18 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:25.047 19:27:18 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:25.306 19:27:18 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.306 19:27:18 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:25.306 19:27:18 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.306 19:27:18 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:25.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.306 --rc genhtml_branch_coverage=1 00:07:25.306 --rc genhtml_function_coverage=1 00:07:25.306 --rc genhtml_legend=1 00:07:25.306 --rc geninfo_all_blocks=1 00:07:25.306 --rc geninfo_unexecuted_blocks=1 00:07:25.306 00:07:25.306 ' 00:07:25.306 19:27:18 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:25.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.306 --rc genhtml_branch_coverage=1 00:07:25.306 --rc genhtml_function_coverage=1 00:07:25.306 --rc genhtml_legend=1 00:07:25.306 --rc geninfo_all_blocks=1 00:07:25.306 --rc geninfo_unexecuted_blocks=1 00:07:25.306 
00:07:25.306 ' 00:07:25.306 19:27:18 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:25.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.306 --rc genhtml_branch_coverage=1 00:07:25.306 --rc genhtml_function_coverage=1 00:07:25.306 --rc genhtml_legend=1 00:07:25.306 --rc geninfo_all_blocks=1 00:07:25.306 --rc geninfo_unexecuted_blocks=1 00:07:25.306 00:07:25.306 ' 00:07:25.306 19:27:18 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:25.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.306 --rc genhtml_branch_coverage=1 00:07:25.306 --rc genhtml_function_coverage=1 00:07:25.306 --rc genhtml_legend=1 00:07:25.306 --rc geninfo_all_blocks=1 00:07:25.306 --rc geninfo_unexecuted_blocks=1 00:07:25.306 00:07:25.306 ' 00:07:25.306 19:27:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:25.306 19:27:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:25.306 19:27:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:25.306 19:27:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:25.306 19:27:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:25.306 19:27:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:25.306 19:27:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:25.306 19:27:18 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.306 19:27:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:25.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.306 19:27:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57782 00:07:25.306 19:27:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:25.306 19:27:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57782 00:07:25.306 19:27:18 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57782 ']' 00:07:25.306 19:27:18 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.306 19:27:18 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.306 19:27:18 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.306 19:27:18 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.306 19:27:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:25.306 [2024-12-05 19:27:18.591108] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:07:25.306 [2024-12-05 19:27:18.591259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57782 ] 00:07:25.306 [2024-12-05 19:27:18.736132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:25.565 [2024-12-05 19:27:18.797693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.565 [2024-12-05 19:27:18.797703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.565 [2024-12-05 19:27:18.869034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.501 19:27:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.501 19:27:19 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:26.501 19:27:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57805 00:07:26.501 19:27:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:26.501 19:27:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:26.501 [ 00:07:26.501 "bdev_malloc_delete", 00:07:26.501 "bdev_malloc_create", 00:07:26.501 "bdev_null_resize", 00:07:26.501 "bdev_null_delete", 00:07:26.501 "bdev_null_create", 00:07:26.501 "bdev_nvme_cuse_unregister", 00:07:26.501 "bdev_nvme_cuse_register", 00:07:26.501 "bdev_opal_new_user", 00:07:26.501 "bdev_opal_set_lock_state", 00:07:26.501 "bdev_opal_delete", 00:07:26.501 "bdev_opal_get_info", 00:07:26.501 "bdev_opal_create", 00:07:26.501 "bdev_nvme_opal_revert", 00:07:26.501 "bdev_nvme_opal_init", 00:07:26.501 "bdev_nvme_send_cmd", 00:07:26.501 "bdev_nvme_set_keys", 00:07:26.501 "bdev_nvme_get_path_iostat", 00:07:26.501 "bdev_nvme_get_mdns_discovery_info", 00:07:26.501 "bdev_nvme_stop_mdns_discovery", 00:07:26.501 "bdev_nvme_start_mdns_discovery", 00:07:26.501 "bdev_nvme_set_multipath_policy", 00:07:26.501 "bdev_nvme_set_preferred_path", 00:07:26.501 "bdev_nvme_get_io_paths", 00:07:26.501 "bdev_nvme_remove_error_injection", 00:07:26.501 "bdev_nvme_add_error_injection", 00:07:26.501 "bdev_nvme_get_discovery_info", 00:07:26.501 "bdev_nvme_stop_discovery", 00:07:26.501 "bdev_nvme_start_discovery", 00:07:26.501 "bdev_nvme_get_controller_health_info", 00:07:26.501 "bdev_nvme_disable_controller", 00:07:26.501 "bdev_nvme_enable_controller", 00:07:26.501 "bdev_nvme_reset_controller", 00:07:26.501 "bdev_nvme_get_transport_statistics", 00:07:26.501 "bdev_nvme_apply_firmware", 00:07:26.501 "bdev_nvme_detach_controller", 00:07:26.501 "bdev_nvme_get_controllers", 00:07:26.501 "bdev_nvme_attach_controller", 00:07:26.501 "bdev_nvme_set_hotplug", 00:07:26.501 "bdev_nvme_set_options", 00:07:26.501 "bdev_passthru_delete", 00:07:26.501 "bdev_passthru_create", 00:07:26.501 "bdev_lvol_set_parent_bdev", 00:07:26.501 "bdev_lvol_set_parent", 00:07:26.501 "bdev_lvol_check_shallow_copy", 00:07:26.501 "bdev_lvol_start_shallow_copy", 00:07:26.501 "bdev_lvol_grow_lvstore", 00:07:26.501 "bdev_lvol_get_lvols", 00:07:26.501 "bdev_lvol_get_lvstores", 00:07:26.501 "bdev_lvol_delete", 00:07:26.501 "bdev_lvol_set_read_only", 00:07:26.501 "bdev_lvol_resize", 00:07:26.501 "bdev_lvol_decouple_parent", 00:07:26.501 "bdev_lvol_inflate", 00:07:26.501 "bdev_lvol_rename", 00:07:26.501 "bdev_lvol_clone_bdev", 00:07:26.501 "bdev_lvol_clone", 00:07:26.501 "bdev_lvol_snapshot", 
00:07:26.501 "bdev_lvol_create", 00:07:26.501 "bdev_lvol_delete_lvstore", 00:07:26.501 "bdev_lvol_rename_lvstore", 00:07:26.501 "bdev_lvol_create_lvstore", 00:07:26.501 "bdev_raid_set_options", 00:07:26.501 "bdev_raid_remove_base_bdev", 00:07:26.501 "bdev_raid_add_base_bdev", 00:07:26.501 "bdev_raid_delete", 00:07:26.501 "bdev_raid_create", 00:07:26.501 "bdev_raid_get_bdevs", 00:07:26.501 "bdev_error_inject_error", 00:07:26.501 "bdev_error_delete", 00:07:26.501 "bdev_error_create", 00:07:26.501 "bdev_split_delete", 00:07:26.501 "bdev_split_create", 00:07:26.501 "bdev_delay_delete", 00:07:26.501 "bdev_delay_create", 00:07:26.501 "bdev_delay_update_latency", 00:07:26.501 "bdev_zone_block_delete", 00:07:26.501 "bdev_zone_block_create", 00:07:26.501 "blobfs_create", 00:07:26.501 "blobfs_detect", 00:07:26.501 "blobfs_set_cache_size", 00:07:26.501 "bdev_aio_delete", 00:07:26.501 "bdev_aio_rescan", 00:07:26.501 "bdev_aio_create", 00:07:26.501 "bdev_ftl_set_property", 00:07:26.501 "bdev_ftl_get_properties", 00:07:26.501 "bdev_ftl_get_stats", 00:07:26.501 "bdev_ftl_unmap", 00:07:26.501 "bdev_ftl_unload", 00:07:26.501 "bdev_ftl_delete", 00:07:26.501 "bdev_ftl_load", 00:07:26.501 "bdev_ftl_create", 00:07:26.501 "bdev_virtio_attach_controller", 00:07:26.501 "bdev_virtio_scsi_get_devices", 00:07:26.501 "bdev_virtio_detach_controller", 00:07:26.501 "bdev_virtio_blk_set_hotplug", 00:07:26.501 "bdev_iscsi_delete", 00:07:26.501 "bdev_iscsi_create", 00:07:26.501 "bdev_iscsi_set_options", 00:07:26.501 "bdev_uring_delete", 00:07:26.501 "bdev_uring_rescan", 00:07:26.501 "bdev_uring_create", 00:07:26.501 "accel_error_inject_error", 00:07:26.501 "ioat_scan_accel_module", 00:07:26.501 "dsa_scan_accel_module", 00:07:26.501 "iaa_scan_accel_module", 00:07:26.501 "keyring_file_remove_key", 00:07:26.501 "keyring_file_add_key", 00:07:26.501 "keyring_linux_set_options", 00:07:26.501 "fsdev_aio_delete", 00:07:26.501 "fsdev_aio_create", 00:07:26.501 "iscsi_get_histogram", 00:07:26.501 "iscsi_enable_histogram", 00:07:26.501 "iscsi_set_options", 00:07:26.502 "iscsi_get_auth_groups", 00:07:26.502 "iscsi_auth_group_remove_secret", 00:07:26.502 "iscsi_auth_group_add_secret", 00:07:26.502 "iscsi_delete_auth_group", 00:07:26.502 "iscsi_create_auth_group", 00:07:26.502 "iscsi_set_discovery_auth", 00:07:26.502 "iscsi_get_options", 00:07:26.502 "iscsi_target_node_request_logout", 00:07:26.502 "iscsi_target_node_set_redirect", 00:07:26.502 "iscsi_target_node_set_auth", 00:07:26.502 "iscsi_target_node_add_lun", 00:07:26.502 "iscsi_get_stats", 00:07:26.502 "iscsi_get_connections", 00:07:26.502 "iscsi_portal_group_set_auth", 00:07:26.502 "iscsi_start_portal_group", 00:07:26.502 "iscsi_delete_portal_group", 00:07:26.502 "iscsi_create_portal_group", 00:07:26.502 "iscsi_get_portal_groups", 00:07:26.502 "iscsi_delete_target_node", 00:07:26.502 "iscsi_target_node_remove_pg_ig_maps", 00:07:26.502 "iscsi_target_node_add_pg_ig_maps", 00:07:26.502 "iscsi_create_target_node", 00:07:26.502 "iscsi_get_target_nodes", 00:07:26.502 "iscsi_delete_initiator_group", 00:07:26.502 "iscsi_initiator_group_remove_initiators", 00:07:26.502 "iscsi_initiator_group_add_initiators", 00:07:26.502 "iscsi_create_initiator_group", 00:07:26.502 "iscsi_get_initiator_groups", 00:07:26.502 "nvmf_set_crdt", 00:07:26.502 "nvmf_set_config", 00:07:26.502 "nvmf_set_max_subsystems", 00:07:26.502 "nvmf_stop_mdns_prr", 00:07:26.502 "nvmf_publish_mdns_prr", 00:07:26.502 "nvmf_subsystem_get_listeners", 00:07:26.502 "nvmf_subsystem_get_qpairs", 00:07:26.502 
"nvmf_subsystem_get_controllers", 00:07:26.502 "nvmf_get_stats", 00:07:26.502 "nvmf_get_transports", 00:07:26.502 "nvmf_create_transport", 00:07:26.502 "nvmf_get_targets", 00:07:26.502 "nvmf_delete_target", 00:07:26.502 "nvmf_create_target", 00:07:26.502 "nvmf_subsystem_allow_any_host", 00:07:26.502 "nvmf_subsystem_set_keys", 00:07:26.502 "nvmf_subsystem_remove_host", 00:07:26.502 "nvmf_subsystem_add_host", 00:07:26.502 "nvmf_ns_remove_host", 00:07:26.502 "nvmf_ns_add_host", 00:07:26.502 "nvmf_subsystem_remove_ns", 00:07:26.502 "nvmf_subsystem_set_ns_ana_group", 00:07:26.502 "nvmf_subsystem_add_ns", 00:07:26.502 "nvmf_subsystem_listener_set_ana_state", 00:07:26.502 "nvmf_discovery_get_referrals", 00:07:26.502 "nvmf_discovery_remove_referral", 00:07:26.502 "nvmf_discovery_add_referral", 00:07:26.502 "nvmf_subsystem_remove_listener", 00:07:26.502 "nvmf_subsystem_add_listener", 00:07:26.502 "nvmf_delete_subsystem", 00:07:26.502 "nvmf_create_subsystem", 00:07:26.502 "nvmf_get_subsystems", 00:07:26.502 "env_dpdk_get_mem_stats", 00:07:26.502 "nbd_get_disks", 00:07:26.502 "nbd_stop_disk", 00:07:26.502 "nbd_start_disk", 00:07:26.502 "ublk_recover_disk", 00:07:26.502 "ublk_get_disks", 00:07:26.502 "ublk_stop_disk", 00:07:26.502 "ublk_start_disk", 00:07:26.502 "ublk_destroy_target", 00:07:26.502 "ublk_create_target", 00:07:26.502 "virtio_blk_create_transport", 00:07:26.502 "virtio_blk_get_transports", 00:07:26.502 "vhost_controller_set_coalescing", 00:07:26.502 "vhost_get_controllers", 00:07:26.502 "vhost_delete_controller", 00:07:26.502 "vhost_create_blk_controller", 00:07:26.502 "vhost_scsi_controller_remove_target", 00:07:26.502 "vhost_scsi_controller_add_target", 00:07:26.502 "vhost_start_scsi_controller", 00:07:26.502 "vhost_create_scsi_controller", 00:07:26.502 "thread_set_cpumask", 00:07:26.502 "scheduler_set_options", 00:07:26.502 "framework_get_governor", 00:07:26.502 "framework_get_scheduler", 00:07:26.502 "framework_set_scheduler", 00:07:26.502 "framework_get_reactors", 00:07:26.502 "thread_get_io_channels", 00:07:26.502 "thread_get_pollers", 00:07:26.502 "thread_get_stats", 00:07:26.502 "framework_monitor_context_switch", 00:07:26.502 "spdk_kill_instance", 00:07:26.502 "log_enable_timestamps", 00:07:26.502 "log_get_flags", 00:07:26.502 "log_clear_flag", 00:07:26.502 "log_set_flag", 00:07:26.502 "log_get_level", 00:07:26.502 "log_set_level", 00:07:26.502 "log_get_print_level", 00:07:26.502 "log_set_print_level", 00:07:26.502 "framework_enable_cpumask_locks", 00:07:26.502 "framework_disable_cpumask_locks", 00:07:26.502 "framework_wait_init", 00:07:26.502 "framework_start_init", 00:07:26.502 "scsi_get_devices", 00:07:26.502 "bdev_get_histogram", 00:07:26.502 "bdev_enable_histogram", 00:07:26.502 "bdev_set_qos_limit", 00:07:26.502 "bdev_set_qd_sampling_period", 00:07:26.502 "bdev_get_bdevs", 00:07:26.502 "bdev_reset_iostat", 00:07:26.502 "bdev_get_iostat", 00:07:26.502 "bdev_examine", 00:07:26.502 "bdev_wait_for_examine", 00:07:26.502 "bdev_set_options", 00:07:26.502 "accel_get_stats", 00:07:26.502 "accel_set_options", 00:07:26.502 "accel_set_driver", 00:07:26.502 "accel_crypto_key_destroy", 00:07:26.502 "accel_crypto_keys_get", 00:07:26.502 "accel_crypto_key_create", 00:07:26.502 "accel_assign_opc", 00:07:26.502 "accel_get_module_info", 00:07:26.502 "accel_get_opc_assignments", 00:07:26.502 "vmd_rescan", 00:07:26.502 "vmd_remove_device", 00:07:26.502 "vmd_enable", 00:07:26.502 "sock_get_default_impl", 00:07:26.502 "sock_set_default_impl", 00:07:26.502 "sock_impl_set_options", 00:07:26.502 
"sock_impl_get_options", 00:07:26.502 "iobuf_get_stats", 00:07:26.502 "iobuf_set_options", 00:07:26.502 "keyring_get_keys", 00:07:26.502 "framework_get_pci_devices", 00:07:26.502 "framework_get_config", 00:07:26.502 "framework_get_subsystems", 00:07:26.502 "fsdev_set_opts", 00:07:26.502 "fsdev_get_opts", 00:07:26.502 "trace_get_info", 00:07:26.502 "trace_get_tpoint_group_mask", 00:07:26.502 "trace_disable_tpoint_group", 00:07:26.502 "trace_enable_tpoint_group", 00:07:26.502 "trace_clear_tpoint_mask", 00:07:26.502 "trace_set_tpoint_mask", 00:07:26.502 "notify_get_notifications", 00:07:26.502 "notify_get_types", 00:07:26.502 "spdk_get_version", 00:07:26.502 "rpc_get_methods" 00:07:26.502 ] 00:07:26.502 19:27:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:26.502 19:27:19 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:26.502 19:27:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:26.762 19:27:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:26.762 19:27:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57782 00:07:26.762 19:27:19 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57782 ']' 00:07:26.762 19:27:19 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57782 00:07:26.762 19:27:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:26.762 19:27:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.762 19:27:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57782 00:07:26.762 killing process with pid 57782 00:07:26.762 19:27:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.762 19:27:19 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.762 19:27:19 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57782' 00:07:26.762 19:27:19 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57782 00:07:26.762 19:27:19 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57782 00:07:27.020 ************************************ 00:07:27.020 END TEST spdkcli_tcp 00:07:27.020 ************************************ 00:07:27.020 00:07:27.020 real 0m2.059s 00:07:27.020 user 0m3.824s 00:07:27.020 sys 0m0.552s 00:07:27.020 19:27:20 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.020 19:27:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:27.020 19:27:20 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:27.020 19:27:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.020 19:27:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.020 19:27:20 -- common/autotest_common.sh@10 -- # set +x 00:07:27.020 ************************************ 00:07:27.020 START TEST dpdk_mem_utility 00:07:27.020 ************************************ 00:07:27.020 19:27:20 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:27.280 * Looking for test storage... 
00:07:27.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:27.280 19:27:20 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:27.280 19:27:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:07:27.280 19:27:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:27.280 19:27:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.280 19:27:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:27.280 19:27:20 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.280 19:27:20 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:27.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.280 --rc genhtml_branch_coverage=1 00:07:27.280 --rc genhtml_function_coverage=1 00:07:27.280 --rc genhtml_legend=1 00:07:27.280 --rc geninfo_all_blocks=1 00:07:27.280 --rc geninfo_unexecuted_blocks=1 00:07:27.280 00:07:27.280 ' 00:07:27.280 19:27:20 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:27.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.280 --rc 
genhtml_branch_coverage=1 00:07:27.280 --rc genhtml_function_coverage=1 00:07:27.280 --rc genhtml_legend=1 00:07:27.280 --rc geninfo_all_blocks=1 00:07:27.280 --rc geninfo_unexecuted_blocks=1 00:07:27.280 00:07:27.280 ' 00:07:27.280 19:27:20 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:27.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.280 --rc genhtml_branch_coverage=1 00:07:27.280 --rc genhtml_function_coverage=1 00:07:27.280 --rc genhtml_legend=1 00:07:27.280 --rc geninfo_all_blocks=1 00:07:27.280 --rc geninfo_unexecuted_blocks=1 00:07:27.280 00:07:27.280 ' 00:07:27.280 19:27:20 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:27.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.280 --rc genhtml_branch_coverage=1 00:07:27.280 --rc genhtml_function_coverage=1 00:07:27.280 --rc genhtml_legend=1 00:07:27.280 --rc geninfo_all_blocks=1 00:07:27.280 --rc geninfo_unexecuted_blocks=1 00:07:27.280 00:07:27.280 ' 00:07:27.280 19:27:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:27.280 19:27:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57886 00:07:27.280 19:27:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:27.280 19:27:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57886 00:07:27.280 19:27:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57886 ']' 00:07:27.280 19:27:20 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.280 19:27:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.280 19:27:20 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.280 19:27:20 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.280 19:27:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:27.280 [2024-12-05 19:27:20.680429] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
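The dpdk_mem_utility test starting here drives the two steps shown in the dump that follows: it asks the target to write a DPDK memory snapshot via the env_dpdk_get_mem_stats RPC (which replies with the dump path, /tmp/spdk_mem_dump.txt), then post-processes that file with scripts/dpdk_mem_info.py, once for the summary and once per heap. A sketch of that flow, assuming dpdk_mem_info.py picks up the default dump path on its own, as the bare invocation suggests:

    SPDK=/home/vagrant/spdk_repo/spdk
    # 1. Have the running spdk_tgt dump its DPDK memory state.
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats      # -> {"filename": "/tmp/spdk_mem_dump.txt"}
    # 2. Summarize heaps, mempools and memzones from the dump.
    "$SPDK/scripts/dpdk_mem_info.py"
    # 3. Detail a single heap (heap id 0), as in the element listing below.
    "$SPDK/scripts/dpdk_mem_info.py" -m 0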
00:07:27.280 [2024-12-05 19:27:20.680804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57886 ] 00:07:27.538 [2024-12-05 19:27:20.829424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.538 [2024-12-05 19:27:20.888142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.538 [2024-12-05 19:27:20.966860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.796 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.796 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:27.796 19:27:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:27.796 19:27:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:27.796 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.796 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:27.796 { 00:07:27.796 "filename": "/tmp/spdk_mem_dump.txt" 00:07:27.796 } 00:07:27.796 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.796 19:27:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:28.058 DPDK memory size 818.000000 MiB in 1 heap(s) 00:07:28.058 1 heaps totaling size 818.000000 MiB 00:07:28.058 size: 818.000000 MiB heap id: 0 00:07:28.058 end heaps---------- 00:07:28.058 9 mempools totaling size 603.782043 MiB 00:07:28.058 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:28.058 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:28.058 size: 100.555481 MiB name: bdev_io_57886 00:07:28.058 size: 50.003479 MiB name: msgpool_57886 00:07:28.058 size: 36.509338 MiB name: fsdev_io_57886 00:07:28.058 size: 21.763794 MiB name: PDU_Pool 00:07:28.058 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:28.058 size: 4.133484 MiB name: evtpool_57886 00:07:28.058 size: 0.026123 MiB name: Session_Pool 00:07:28.058 end mempools------- 00:07:28.058 6 memzones totaling size 4.142822 MiB 00:07:28.058 size: 1.000366 MiB name: RG_ring_0_57886 00:07:28.058 size: 1.000366 MiB name: RG_ring_1_57886 00:07:28.058 size: 1.000366 MiB name: RG_ring_4_57886 00:07:28.058 size: 1.000366 MiB name: RG_ring_5_57886 00:07:28.058 size: 0.125366 MiB name: RG_ring_2_57886 00:07:28.058 size: 0.015991 MiB name: RG_ring_3_57886 00:07:28.058 end memzones------- 00:07:28.058 19:27:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:28.058 heap id: 0 total size: 818.000000 MiB number of busy elements: 318 number of free elements: 15 00:07:28.058 list of free elements. 
size: 10.802307 MiB 00:07:28.058 element at address: 0x200019200000 with size: 0.999878 MiB 00:07:28.058 element at address: 0x200019400000 with size: 0.999878 MiB 00:07:28.058 element at address: 0x200032000000 with size: 0.994446 MiB 00:07:28.058 element at address: 0x200000400000 with size: 0.993958 MiB 00:07:28.058 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:28.058 element at address: 0x200012c00000 with size: 0.944275 MiB 00:07:28.058 element at address: 0x200019600000 with size: 0.936584 MiB 00:07:28.058 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:28.058 element at address: 0x20001ae00000 with size: 0.567505 MiB 00:07:28.058 element at address: 0x20000a600000 with size: 0.488892 MiB 00:07:28.058 element at address: 0x200000c00000 with size: 0.486267 MiB 00:07:28.058 element at address: 0x200019800000 with size: 0.485657 MiB 00:07:28.058 element at address: 0x200003e00000 with size: 0.480286 MiB 00:07:28.058 element at address: 0x200028200000 with size: 0.395752 MiB 00:07:28.058 element at address: 0x200000800000 with size: 0.351746 MiB 00:07:28.058 list of standard malloc elements. size: 199.268799 MiB 00:07:28.058 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:28.058 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:28.058 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:28.058 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:07:28.058 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:07:28.058 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:28.058 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:07:28.058 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:28.058 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:07:28.058 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:07:28.058 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000085e580 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087e840 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087e900 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087f080 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087f140 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087f200 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087f380 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087f440 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087f500 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:28.058 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:07:28.058 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:07:28.058 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:28.059 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:07:28.059 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:07:28.059 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:07:28.059 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae91480 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 
00:07:28.059 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:07:28.059 element at 
address: 0x20001ae95140 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:07:28.059 element at address: 0x200028265500 with size: 0.000183 MiB 00:07:28.059 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20002826c480 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20002826c540 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20002826c600 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20002826c780 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20002826c840 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20002826c900 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:07:28.059 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826d080 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826d140 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826d200 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826d380 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826d440 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826d500 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826d680 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826d740 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826d800 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826d980 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826da40 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826db00 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826de00 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826df80 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826e040 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826e100 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826e280 
with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826e340 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826e400 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826e580 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826e640 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826e700 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826e880 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826e940 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826f000 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826f180 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826f240 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826f300 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826f480 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826f540 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826f600 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826f780 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826f840 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826f900 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:07:28.060 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:07:28.060 list of memzone associated elements. 
size: 607.928894 MiB 00:07:28.060 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:07:28.060 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:28.060 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:07:28.060 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:28.060 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:07:28.060 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57886_0 00:07:28.060 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:28.060 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57886_0 00:07:28.060 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:28.060 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57886_0 00:07:28.060 element at address: 0x2000199be940 with size: 20.255554 MiB 00:07:28.060 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:28.060 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:07:28.060 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:28.060 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:28.060 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57886_0 00:07:28.060 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:28.060 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57886 00:07:28.060 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:28.060 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57886 00:07:28.060 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:28.060 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:28.060 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:07:28.060 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:28.060 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:28.060 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:28.060 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:28.060 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:28.060 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:28.060 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57886 00:07:28.060 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:28.060 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57886 00:07:28.060 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:07:28.060 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57886 00:07:28.060 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:07:28.060 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57886 00:07:28.060 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:28.060 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57886 00:07:28.060 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:28.060 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57886 00:07:28.060 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:28.060 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:28.060 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:28.060 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:28.060 element at address: 0x20001987c540 with size: 0.250488 MiB 00:07:28.060 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:07:28.060 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:28.060 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57886 00:07:28.060 element at address: 0x20000085e640 with size: 0.125488 MiB 00:07:28.060 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57886 00:07:28.060 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:07:28.060 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:28.060 element at address: 0x200028265680 with size: 0.023743 MiB 00:07:28.060 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:28.060 element at address: 0x20000085a380 with size: 0.016113 MiB 00:07:28.060 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57886 00:07:28.060 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:07:28.060 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:28.060 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:07:28.060 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57886 00:07:28.060 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:28.060 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57886 00:07:28.060 element at address: 0x20000085a180 with size: 0.000305 MiB 00:07:28.060 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57886 00:07:28.060 element at address: 0x20002826c280 with size: 0.000305 MiB 00:07:28.060 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:28.060 19:27:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:28.060 19:27:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57886 00:07:28.060 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57886 ']' 00:07:28.060 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57886 00:07:28.060 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:28.060 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.060 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57886 00:07:28.060 killing process with pid 57886 00:07:28.060 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.060 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.060 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57886' 00:07:28.060 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57886 00:07:28.060 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57886 00:07:28.653 00:07:28.653 real 0m1.349s 00:07:28.653 user 0m1.307s 00:07:28.653 sys 0m0.427s 00:07:28.653 ************************************ 00:07:28.653 END TEST dpdk_mem_utility 00:07:28.653 ************************************ 00:07:28.653 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.653 19:27:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:28.653 19:27:21 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:28.653 19:27:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.653 19:27:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.653 19:27:21 -- common/autotest_common.sh@10 -- # set +x 
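The dpdk_mem_utility flow traced above is two tools chained together: the env_dpdk_get_mem_stats RPC tells the running spdk_tgt to write its DPDK heap state to /tmp/spdk_mem_dump.txt (the {"filename": ...} reply above), and scripts/dpdk_mem_info.py then summarizes that dump, first as the heap/mempool/memzone totals and then, with -m 0, as the per-element listing printed above. A minimal manual sketch, assuming the workspace paths from this run and a target already listening on /var/tmp/spdk.sock:

SPDK_REPO=/home/vagrant/spdk_repo/spdk                 # path as traced in this job
"$SPDK_REPO/scripts/rpc.py" env_dpdk_get_mem_stats     # target writes /tmp/spdk_mem_dump.txt
"$SPDK_REPO/scripts/dpdk_mem_info.py"                  # heap/mempool/memzone summary
"$SPDK_REPO/scripts/dpdk_mem_info.py" -m 0             # per-element detail for heap 0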
00:07:28.653 ************************************ 00:07:28.653 START TEST event 00:07:28.653 ************************************ 00:07:28.653 19:27:21 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:28.653 * Looking for test storage... 00:07:28.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:28.653 19:27:21 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:28.653 19:27:21 event -- common/autotest_common.sh@1711 -- # lcov --version 00:07:28.653 19:27:21 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:28.653 19:27:21 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:28.653 19:27:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.653 19:27:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.653 19:27:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.653 19:27:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.653 19:27:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.653 19:27:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.653 19:27:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.653 19:27:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.653 19:27:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.653 19:27:22 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.653 19:27:22 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.653 19:27:22 event -- scripts/common.sh@344 -- # case "$op" in 00:07:28.653 19:27:22 event -- scripts/common.sh@345 -- # : 1 00:07:28.653 19:27:22 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.653 19:27:22 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:28.653 19:27:22 event -- scripts/common.sh@365 -- # decimal 1 00:07:28.653 19:27:22 event -- scripts/common.sh@353 -- # local d=1 00:07:28.653 19:27:22 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.653 19:27:22 event -- scripts/common.sh@355 -- # echo 1 00:07:28.653 19:27:22 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.653 19:27:22 event -- scripts/common.sh@366 -- # decimal 2 00:07:28.653 19:27:22 event -- scripts/common.sh@353 -- # local d=2 00:07:28.653 19:27:22 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.653 19:27:22 event -- scripts/common.sh@355 -- # echo 2 00:07:28.653 19:27:22 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.653 19:27:22 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.653 19:27:22 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.653 19:27:22 event -- scripts/common.sh@368 -- # return 0 00:07:28.653 19:27:22 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.653 19:27:22 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:28.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.653 --rc genhtml_branch_coverage=1 00:07:28.653 --rc genhtml_function_coverage=1 00:07:28.653 --rc genhtml_legend=1 00:07:28.653 --rc geninfo_all_blocks=1 00:07:28.653 --rc geninfo_unexecuted_blocks=1 00:07:28.653 00:07:28.653 ' 00:07:28.653 19:27:22 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:28.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.653 --rc genhtml_branch_coverage=1 00:07:28.653 --rc genhtml_function_coverage=1 00:07:28.653 --rc genhtml_legend=1 00:07:28.653 --rc 
geninfo_all_blocks=1 00:07:28.653 --rc geninfo_unexecuted_blocks=1 00:07:28.653 00:07:28.653 ' 00:07:28.653 19:27:22 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:28.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.653 --rc genhtml_branch_coverage=1 00:07:28.653 --rc genhtml_function_coverage=1 00:07:28.653 --rc genhtml_legend=1 00:07:28.653 --rc geninfo_all_blocks=1 00:07:28.653 --rc geninfo_unexecuted_blocks=1 00:07:28.653 00:07:28.653 ' 00:07:28.653 19:27:22 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:28.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.653 --rc genhtml_branch_coverage=1 00:07:28.653 --rc genhtml_function_coverage=1 00:07:28.653 --rc genhtml_legend=1 00:07:28.653 --rc geninfo_all_blocks=1 00:07:28.653 --rc geninfo_unexecuted_blocks=1 00:07:28.653 00:07:28.653 ' 00:07:28.653 19:27:22 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:28.653 19:27:22 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:28.653 19:27:22 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:28.653 19:27:22 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:28.653 19:27:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.653 19:27:22 event -- common/autotest_common.sh@10 -- # set +x 00:07:28.653 ************************************ 00:07:28.653 START TEST event_perf 00:07:28.653 ************************************ 00:07:28.653 19:27:22 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:28.653 Running I/O for 1 seconds...[2024-12-05 19:27:22.050325] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:28.653 [2024-12-05 19:27:22.050622] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57959 ] 00:07:28.911 [2024-12-05 19:27:22.200701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.911 [2024-12-05 19:27:22.265757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.912 [2024-12-05 19:27:22.265893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.912 [2024-12-05 19:27:22.266020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.912 [2024-12-05 19:27:22.266020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.289 Running I/O for 1 seconds... 00:07:30.289 lcore 0: 200539 00:07:30.289 lcore 1: 200539 00:07:30.289 lcore 2: 200538 00:07:30.289 lcore 3: 200539 00:07:30.289 done. 
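For the event_perf run above, -m 0xF is a four-core mask (lcores 0-3) and -t 1 requests a one-second measurement; each of the four reactors reports roughly 200,539 events, so the aggregate is about 4 x 200,539 ≈ 802,000 events handled in that second. The real 0m1.29s wall time reported next is larger than the 1 s test window, presumably because it also covers app startup and teardown.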
00:07:30.289 00:07:30.289 real 0m1.290s 00:07:30.289 user 0m4.107s 00:07:30.289 sys 0m0.057s 00:07:30.289 19:27:23 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.289 ************************************ 00:07:30.289 END TEST event_perf 00:07:30.289 ************************************ 00:07:30.289 19:27:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:30.289 19:27:23 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:30.289 19:27:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:30.289 19:27:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.289 19:27:23 event -- common/autotest_common.sh@10 -- # set +x 00:07:30.289 ************************************ 00:07:30.289 START TEST event_reactor 00:07:30.289 ************************************ 00:07:30.289 19:27:23 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:30.289 [2024-12-05 19:27:23.394070] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:30.289 [2024-12-05 19:27:23.394459] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57992 ] 00:07:30.289 [2024-12-05 19:27:23.538901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.289 [2024-12-05 19:27:23.598590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.225 test_start 00:07:31.225 oneshot 00:07:31.225 tick 100 00:07:31.225 tick 100 00:07:31.225 tick 250 00:07:31.225 tick 100 00:07:31.225 tick 100 00:07:31.225 tick 100 00:07:31.225 tick 250 00:07:31.225 tick 500 00:07:31.225 tick 100 00:07:31.225 tick 100 00:07:31.225 tick 250 00:07:31.225 tick 100 00:07:31.225 tick 100 00:07:31.225 test_end 00:07:31.225 00:07:31.225 real 0m1.269s 00:07:31.225 user 0m1.119s 00:07:31.225 sys 0m0.043s 00:07:31.225 19:27:24 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.225 19:27:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:31.225 ************************************ 00:07:31.225 END TEST event_reactor 00:07:31.225 ************************************ 00:07:31.485 19:27:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:31.485 19:27:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:31.485 19:27:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.485 19:27:24 event -- common/autotest_common.sh@10 -- # set +x 00:07:31.485 ************************************ 00:07:31.485 START TEST event_reactor_perf 00:07:31.485 ************************************ 00:07:31.485 19:27:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:31.485 [2024-12-05 19:27:24.719617] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:07:31.485 [2024-12-05 19:27:24.719940] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58028 ] 00:07:31.485 [2024-12-05 19:27:24.868626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.743 [2024-12-05 19:27:24.926022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.678 test_start 00:07:32.678 test_end 00:07:32.678 Performance: 408708 events per second 00:07:32.678 00:07:32.678 real 0m1.278s 00:07:32.678 user 0m1.127s 00:07:32.678 sys 0m0.044s 00:07:32.678 19:27:25 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.678 ************************************ 00:07:32.678 END TEST event_reactor_perf 00:07:32.678 ************************************ 00:07:32.678 19:27:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:32.678 19:27:26 event -- event/event.sh@49 -- # uname -s 00:07:32.678 19:27:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:32.678 19:27:26 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:32.678 19:27:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.678 19:27:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.678 19:27:26 event -- common/autotest_common.sh@10 -- # set +x 00:07:32.678 ************************************ 00:07:32.678 START TEST event_scheduler 00:07:32.678 ************************************ 00:07:32.678 19:27:26 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:32.937 * Looking for test storage... 
00:07:32.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:32.937 19:27:26 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:32.937 19:27:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:07:32.937 19:27:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:32.937 19:27:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:32.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.937 19:27:26 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:32.937 19:27:26 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.937 19:27:26 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:32.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.937 --rc genhtml_branch_coverage=1 00:07:32.937 --rc genhtml_function_coverage=1 00:07:32.937 --rc genhtml_legend=1 00:07:32.937 --rc geninfo_all_blocks=1 00:07:32.937 --rc geninfo_unexecuted_blocks=1 00:07:32.937 00:07:32.937 ' 00:07:32.937 19:27:26 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:32.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.937 --rc genhtml_branch_coverage=1 00:07:32.937 --rc genhtml_function_coverage=1 00:07:32.937 --rc genhtml_legend=1 00:07:32.937 --rc geninfo_all_blocks=1 00:07:32.937 --rc geninfo_unexecuted_blocks=1 00:07:32.937 00:07:32.937 ' 00:07:32.937 19:27:26 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:32.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.937 --rc genhtml_branch_coverage=1 00:07:32.937 --rc genhtml_function_coverage=1 00:07:32.937 --rc genhtml_legend=1 00:07:32.937 --rc geninfo_all_blocks=1 00:07:32.937 --rc geninfo_unexecuted_blocks=1 00:07:32.937 00:07:32.937 ' 00:07:32.937 19:27:26 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:32.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.937 --rc genhtml_branch_coverage=1 00:07:32.937 --rc genhtml_function_coverage=1 00:07:32.937 --rc genhtml_legend=1 00:07:32.937 --rc geninfo_all_blocks=1 00:07:32.937 --rc geninfo_unexecuted_blocks=1 00:07:32.937 00:07:32.938 ' 00:07:32.938 19:27:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:32.938 19:27:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58097 00:07:32.938 19:27:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:32.938 19:27:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58097 00:07:32.938 19:27:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:32.938 19:27:26 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58097 ']' 00:07:32.938 19:27:26 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.938 19:27:26 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.938 19:27:26 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.938 19:27:26 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.938 19:27:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:32.938 [2024-12-05 19:27:26.348962] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:07:32.938 [2024-12-05 19:27:26.349814] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58097 ] 00:07:33.196 [2024-12-05 19:27:26.508474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.196 [2024-12-05 19:27:26.582616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.196 [2024-12-05 19:27:26.582925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.196 [2024-12-05 19:27:26.582788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.196 [2024-12-05 19:27:26.582906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.144 19:27:27 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.144 19:27:27 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:34.144 19:27:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:34.144 19:27:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.144 19:27:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:34.144 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:34.144 POWER: Cannot set governor of lcore 0 to userspace 00:07:34.144 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:34.144 POWER: Cannot set governor of lcore 0 to performance 00:07:34.144 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:34.144 POWER: Cannot set governor of lcore 0 to userspace 00:07:34.144 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:34.144 POWER: Cannot set governor of lcore 0 to userspace 00:07:34.144 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:34.144 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:34.144 POWER: Unable to set Power Management Environment for lcore 0 00:07:34.144 [2024-12-05 19:27:27.418803] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:07:34.144 [2024-12-05 19:27:27.418869] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:07:34.144 [2024-12-05 19:27:27.418946] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:34.144 [2024-12-05 19:27:27.419015] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:34.144 [2024-12-05 19:27:27.419094] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:34.144 [2024-12-05 19:27:27.419317] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:34.144 19:27:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.144 19:27:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:34.144 19:27:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.144 19:27:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:34.144 [2024-12-05 19:27:27.475551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.144 [2024-12-05 19:27:27.513977] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:34.144 19:27:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.144 19:27:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:34.144 19:27:27 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.144 19:27:27 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.144 19:27:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:34.144 ************************************ 00:07:34.144 START TEST scheduler_create_thread 00:07:34.144 ************************************ 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.144 2 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.144 3 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.144 4 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.144 5 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:34.144 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.145 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.145 6 00:07:34.145 
19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.145 19:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:34.145 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.145 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.145 7 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.437 8 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.437 9 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.437 10 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.437 19:27:27 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.437 19:27:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.811 19:27:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.811 19:27:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:35.811 19:27:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:35.811 19:27:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.811 19:27:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.746 ************************************ 00:07:36.746 END TEST scheduler_create_thread 00:07:36.746 ************************************ 00:07:36.746 19:27:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.746 00:07:36.746 real 0m2.611s 00:07:36.746 user 0m0.020s 00:07:36.746 sys 0m0.007s 00:07:36.746 19:27:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.746 19:27:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.005 19:27:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:37.005 19:27:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58097 00:07:37.005 19:27:30 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58097 ']' 00:07:37.006 19:27:30 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58097 00:07:37.006 19:27:30 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:37.006 19:27:30 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.006 19:27:30 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58097 00:07:37.006 killing process with pid 58097 00:07:37.006 19:27:30 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:37.006 19:27:30 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:37.006 19:27:30 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58097' 00:07:37.006 19:27:30 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58097 00:07:37.006 19:27:30 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58097 00:07:37.264 [2024-12-05 19:27:30.618825] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
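For reference, the scheduler test traced above reduces to roughly the following RPC sequence (a condensed sketch rather than the script itself; the rpc.py path and the scheduler_plugin test RPCs are as shown in the trace, while the loop structure and variable names are illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Switch the running app to the dynamic scheduler, then finish subsystem init.
$rpc framework_set_scheduler dynamic
$rpc framework_start_init

# One busy (-a 100) and one idle (-a 0) thread pinned to each of the four cores.
for i in 0 1 2 3; do
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m $((1 << i)) -a 100
    $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m $((1 << i)) -a 0
done

# Unpinned threads: one at 30% load, one raised from 0% to 50%, one created and then deleted.
$rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
$rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
$rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"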
00:07:37.524 00:07:37.524 real 0m4.796s 00:07:37.524 user 0m9.182s 00:07:37.524 sys 0m0.429s 00:07:37.524 19:27:30 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.524 ************************************ 00:07:37.524 END TEST event_scheduler 00:07:37.524 ************************************ 00:07:37.524 19:27:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:37.524 19:27:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:37.524 19:27:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:37.524 19:27:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.524 19:27:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.524 19:27:30 event -- common/autotest_common.sh@10 -- # set +x 00:07:37.524 ************************************ 00:07:37.524 START TEST app_repeat 00:07:37.524 ************************************ 00:07:37.524 19:27:30 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:37.524 19:27:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.524 19:27:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.524 19:27:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:37.524 19:27:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:37.524 19:27:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:37.524 19:27:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:37.524 19:27:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:37.524 Process app_repeat pid: 58202 00:07:37.524 spdk_app_start Round 0 00:07:37.524 19:27:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58202 00:07:37.524 19:27:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:37.524 19:27:30 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:37.524 19:27:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58202' 00:07:37.524 19:27:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:37.524 19:27:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:37.524 19:27:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58202 /var/tmp/spdk-nbd.sock 00:07:37.524 19:27:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58202 ']' 00:07:37.524 19:27:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:37.524 19:27:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.524 19:27:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:37.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:37.524 19:27:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.524 19:27:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:37.524 [2024-12-05 19:27:30.918922] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
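The app_repeat test that starts here drives the app through repeated restart rounds; assembled from the traced commands, the driver amounts to roughly this (a sketch using the helper names visible in the trace; the pid capture via $! is illustrative, socket path and sizes are from this run):

/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
        -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
repeat_pid=$!
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

for i in {0..2}; do
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # wait for this round's RPC socket
    $rpc bdev_malloc_create 64 4096                      # -> Malloc0
    $rpc bdev_malloc_create 64 4096                      # -> Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    $rpc spdk_kill_instance SIGTERM                      # app_repeat starts the next round itself
    sleep 3
done
waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock       # final round, then killprocess tears it down
killprocess "$repeat_pid"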
00:07:37.524 [2024-12-05 19:27:30.919861] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58202 ] 00:07:37.783 [2024-12-05 19:27:31.069667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:37.783 [2024-12-05 19:27:31.180857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.783 [2024-12-05 19:27:31.180882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.043 [2024-12-05 19:27:31.258123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.043 19:27:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.043 19:27:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:38.043 19:27:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:38.302 Malloc0 00:07:38.302 19:27:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:38.560 Malloc1 00:07:38.561 19:27:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:38.561 19:27:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.561 19:27:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:38.561 19:27:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:38.561 19:27:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.561 19:27:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:38.561 19:27:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:38.561 19:27:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.561 19:27:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:38.561 19:27:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:38.561 19:27:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.561 19:27:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:38.561 19:27:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:38.561 19:27:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:38.561 19:27:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:38.561 19:27:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:38.819 /dev/nbd0 00:07:38.819 19:27:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:38.819 19:27:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:38.819 19:27:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:38.819 19:27:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:38.819 19:27:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:38.819 19:27:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:38.819 19:27:32 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:38.819 19:27:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:38.819 19:27:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:38.819 19:27:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:38.819 19:27:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:38.819 1+0 records in 00:07:38.819 1+0 records out 00:07:38.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00091722 s, 4.5 MB/s 00:07:38.819 19:27:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:38.819 19:27:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:38.819 19:27:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:38.819 19:27:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:38.819 19:27:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:38.819 19:27:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.819 19:27:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:38.819 19:27:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:39.468 /dev/nbd1 00:07:39.468 19:27:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:39.468 19:27:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:39.468 19:27:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:39.468 19:27:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:39.468 19:27:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:39.468 19:27:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:39.468 19:27:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:39.468 19:27:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:39.468 19:27:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:39.468 19:27:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:39.468 19:27:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:39.468 1+0 records in 00:07:39.468 1+0 records out 00:07:39.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251865 s, 16.3 MB/s 00:07:39.468 19:27:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:39.468 19:27:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:39.468 19:27:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:39.468 19:27:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:39.468 19:27:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:39.468 19:27:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.468 19:27:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:39.468 19:27:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:07:39.468 19:27:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.468 19:27:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:39.468 19:27:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:39.468 { 00:07:39.468 "nbd_device": "/dev/nbd0", 00:07:39.468 "bdev_name": "Malloc0" 00:07:39.468 }, 00:07:39.468 { 00:07:39.468 "nbd_device": "/dev/nbd1", 00:07:39.468 "bdev_name": "Malloc1" 00:07:39.468 } 00:07:39.468 ]' 00:07:39.468 19:27:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:39.468 { 00:07:39.468 "nbd_device": "/dev/nbd0", 00:07:39.468 "bdev_name": "Malloc0" 00:07:39.468 }, 00:07:39.468 { 00:07:39.468 "nbd_device": "/dev/nbd1", 00:07:39.468 "bdev_name": "Malloc1" 00:07:39.468 } 00:07:39.468 ]' 00:07:39.468 19:27:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:39.727 /dev/nbd1' 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:39.727 /dev/nbd1' 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:39.727 256+0 records in 00:07:39.727 256+0 records out 00:07:39.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00860877 s, 122 MB/s 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:39.727 256+0 records in 00:07:39.727 256+0 records out 00:07:39.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204105 s, 51.4 MB/s 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:39.727 256+0 records in 00:07:39.727 256+0 records out 00:07:39.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261922 s, 40.0 MB/s 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:39.727 19:27:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.727 19:27:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:39.727 19:27:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.727 19:27:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:39.727 19:27:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:39.727 19:27:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:39.727 19:27:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.727 19:27:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.727 19:27:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:39.727 19:27:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:39.727 19:27:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.727 19:27:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:39.986 19:27:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:39.986 19:27:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:39.986 19:27:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:39.986 19:27:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.986 19:27:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.986 19:27:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:39.986 19:27:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:39.986 19:27:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.986 19:27:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.986 19:27:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:40.245 19:27:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:40.245 19:27:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:40.245 19:27:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:40.245 19:27:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.245 19:27:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.245 19:27:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:40.245 19:27:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:40.245 19:27:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.245 19:27:33 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:40.245 19:27:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.245 19:27:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.504 19:27:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:40.504 19:27:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:40.504 19:27:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:40.504 19:27:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:40.504 19:27:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:40.504 19:27:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.504 19:27:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:40.504 19:27:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:40.504 19:27:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:40.504 19:27:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:40.504 19:27:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:40.504 19:27:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:40.504 19:27:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:41.071 19:27:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:41.071 [2024-12-05 19:27:34.383458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:41.072 [2024-12-05 19:27:34.430330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.072 [2024-12-05 19:27:34.430342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.072 [2024-12-05 19:27:34.482730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.072 [2024-12-05 19:27:34.482827] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:41.072 [2024-12-05 19:27:34.482840] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:44.353 spdk_app_start Round 1 00:07:44.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:44.353 19:27:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:44.353 19:27:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:44.353 19:27:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58202 /var/tmp/spdk-nbd.sock 00:07:44.353 19:27:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58202 ']' 00:07:44.353 19:27:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:44.353 19:27:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.353 19:27:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
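The nbd_rpc_data_verify step traced in Round 0 above expands to the following (a sketch of the commands shown in the trace; the scratch-file path is the one the test itself uses):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
randtest=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

$rpc nbd_start_disk Malloc0 /dev/nbd0     # export each malloc bdev as an nbd block device
$rpc nbd_start_disk Malloc1 /dev/nbd1
$rpc nbd_get_disks                        # expect two entries before the write/verify pass

dd if=/dev/urandom of=$randtest bs=4096 count=256            # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$randtest of=$nbd bs=4096 count=256 oflag=direct   # write the pattern to each device
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M $randtest $nbd                              # read back and compare byte-for-byte
done
rm $randtest

$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
$rpc nbd_get_disks                        # now an empty list, so the round can be shut down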
00:07:44.353 19:27:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.353 19:27:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:44.353 19:27:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.353 19:27:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:44.353 19:27:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:44.353 Malloc0 00:07:44.612 19:27:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:44.878 Malloc1 00:07:44.878 19:27:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:44.878 19:27:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.878 19:27:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:44.878 19:27:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:44.878 19:27:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.878 19:27:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:44.878 19:27:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:44.878 19:27:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.878 19:27:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:44.878 19:27:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:44.878 19:27:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.878 19:27:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:44.878 19:27:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:44.878 19:27:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:44.878 19:27:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:44.878 19:27:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:45.172 /dev/nbd0 00:07:45.172 19:27:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:45.172 19:27:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:45.172 19:27:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:45.172 19:27:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:45.172 19:27:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.172 19:27:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.172 19:27:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:45.172 19:27:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:45.172 19:27:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.172 19:27:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.172 19:27:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:45.172 1+0 records in 00:07:45.172 1+0 records out 
00:07:45.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553933 s, 7.4 MB/s 00:07:45.172 19:27:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:45.172 19:27:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:45.172 19:27:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:45.172 19:27:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.172 19:27:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:45.172 19:27:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.172 19:27:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:45.172 19:27:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:45.430 /dev/nbd1 00:07:45.430 19:27:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:45.430 19:27:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:45.430 19:27:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:45.430 19:27:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:45.430 19:27:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.430 19:27:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.430 19:27:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:45.430 19:27:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:45.430 19:27:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.430 19:27:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.430 19:27:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:45.430 1+0 records in 00:07:45.430 1+0 records out 00:07:45.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210637 s, 19.4 MB/s 00:07:45.430 19:27:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:45.430 19:27:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:45.430 19:27:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:45.430 19:27:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.430 19:27:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:45.430 19:27:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.430 19:27:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:45.430 19:27:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:45.430 19:27:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.430 19:27:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.688 19:27:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:45.688 { 00:07:45.688 "nbd_device": "/dev/nbd0", 00:07:45.688 "bdev_name": "Malloc0" 00:07:45.688 }, 00:07:45.688 { 00:07:45.688 "nbd_device": "/dev/nbd1", 00:07:45.688 "bdev_name": "Malloc1" 00:07:45.688 } 
00:07:45.688 ]' 00:07:45.688 19:27:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:45.688 { 00:07:45.688 "nbd_device": "/dev/nbd0", 00:07:45.688 "bdev_name": "Malloc0" 00:07:45.688 }, 00:07:45.688 { 00:07:45.688 "nbd_device": "/dev/nbd1", 00:07:45.688 "bdev_name": "Malloc1" 00:07:45.688 } 00:07:45.688 ]' 00:07:45.688 19:27:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:45.688 /dev/nbd1' 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:45.688 /dev/nbd1' 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:45.688 256+0 records in 00:07:45.688 256+0 records out 00:07:45.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00623661 s, 168 MB/s 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:45.688 256+0 records in 00:07:45.688 256+0 records out 00:07:45.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219412 s, 47.8 MB/s 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:45.688 256+0 records in 00:07:45.688 256+0 records out 00:07:45.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236258 s, 44.4 MB/s 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:45.688 19:27:39 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.688 19:27:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.256 19:27:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.256 19:27:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.256 19:27:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.256 19:27:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.256 19:27:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.256 19:27:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.256 19:27:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:46.256 19:27:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.256 19:27:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.256 19:27:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:46.515 19:27:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:46.515 19:27:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:46.515 19:27:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:46.515 19:27:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.515 19:27:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.515 19:27:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:46.515 19:27:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:46.515 19:27:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.515 19:27:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:46.515 19:27:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.515 19:27:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.773 19:27:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:46.773 19:27:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:46.773 19:27:40 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:46.773 19:27:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:46.773 19:27:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:46.773 19:27:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:46.773 19:27:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:46.773 19:27:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:46.773 19:27:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:46.773 19:27:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:46.773 19:27:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:46.773 19:27:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:46.773 19:27:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:47.339 19:27:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:47.339 [2024-12-05 19:27:40.660694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:47.339 [2024-12-05 19:27:40.696352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.339 [2024-12-05 19:27:40.696364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.339 [2024-12-05 19:27:40.752495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.339 [2024-12-05 19:27:40.752595] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:47.339 [2024-12-05 19:27:40.752610] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:50.662 19:27:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:50.662 spdk_app_start Round 2 00:07:50.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:50.662 19:27:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:50.662 19:27:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58202 /var/tmp/spdk-nbd.sock 00:07:50.662 19:27:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58202 ']' 00:07:50.662 19:27:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:50.662 19:27:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.662 19:27:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:50.662 19:27:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.662 19:27:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:50.662 19:27:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.662 19:27:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:50.662 19:27:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:50.920 Malloc0 00:07:50.920 19:27:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:51.180 Malloc1 00:07:51.180 19:27:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:51.180 19:27:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.180 19:27:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:51.180 19:27:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:51.180 19:27:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.180 19:27:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:51.180 19:27:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:51.180 19:27:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.180 19:27:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:51.180 19:27:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:51.180 19:27:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.180 19:27:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:51.180 19:27:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:51.180 19:27:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:51.180 19:27:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:51.180 19:27:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:51.443 /dev/nbd0 00:07:51.443 19:27:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:51.443 19:27:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:51.443 19:27:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:51.443 19:27:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:51.443 19:27:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:51.443 19:27:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:51.443 19:27:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:51.443 19:27:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:51.443 19:27:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:51.443 19:27:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:51.443 19:27:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:51.443 1+0 records in 00:07:51.443 1+0 records out 
00:07:51.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222916 s, 18.4 MB/s 00:07:51.443 19:27:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:51.443 19:27:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:51.443 19:27:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:51.443 19:27:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:51.443 19:27:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:51.443 19:27:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.443 19:27:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:51.443 19:27:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:51.702 /dev/nbd1 00:07:51.702 19:27:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:51.702 19:27:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:51.702 19:27:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:51.702 19:27:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:51.702 19:27:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:51.702 19:27:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:51.702 19:27:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:51.702 19:27:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:51.702 19:27:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:51.702 19:27:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:51.702 19:27:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:51.702 1+0 records in 00:07:51.702 1+0 records out 00:07:51.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514033 s, 8.0 MB/s 00:07:51.702 19:27:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:51.702 19:27:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:51.702 19:27:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:51.702 19:27:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:51.702 19:27:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:51.702 19:27:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.702 19:27:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:51.702 19:27:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.702 19:27:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.702 19:27:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.960 19:27:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:51.960 { 00:07:51.960 "nbd_device": "/dev/nbd0", 00:07:51.960 "bdev_name": "Malloc0" 00:07:51.960 }, 00:07:51.960 { 00:07:51.960 "nbd_device": "/dev/nbd1", 00:07:51.960 "bdev_name": "Malloc1" 00:07:51.960 } 
00:07:51.960 ]' 00:07:51.960 19:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:51.960 { 00:07:51.960 "nbd_device": "/dev/nbd0", 00:07:51.960 "bdev_name": "Malloc0" 00:07:51.960 }, 00:07:51.960 { 00:07:51.960 "nbd_device": "/dev/nbd1", 00:07:51.960 "bdev_name": "Malloc1" 00:07:51.960 } 00:07:51.960 ]' 00:07:51.960 19:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:52.218 /dev/nbd1' 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:52.218 /dev/nbd1' 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:52.218 256+0 records in 00:07:52.218 256+0 records out 00:07:52.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00687916 s, 152 MB/s 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:52.218 256+0 records in 00:07:52.218 256+0 records out 00:07:52.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241552 s, 43.4 MB/s 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:52.218 256+0 records in 00:07:52.218 256+0 records out 00:07:52.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258402 s, 40.6 MB/s 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:52.218 19:27:45 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.218 19:27:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:52.477 19:27:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:52.477 19:27:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:52.477 19:27:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:52.477 19:27:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.477 19:27:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.477 19:27:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:52.477 19:27:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:52.477 19:27:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.477 19:27:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.477 19:27:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:52.735 19:27:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:52.735 19:27:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:52.735 19:27:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:52.735 19:27:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.735 19:27:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.735 19:27:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:52.735 19:27:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:52.735 19:27:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.735 19:27:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:52.735 19:27:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.736 19:27:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:52.994 19:27:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:52.994 19:27:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:52.994 19:27:46 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:52.994 19:27:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:52.994 19:27:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:52.994 19:27:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:53.253 19:27:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:53.253 19:27:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:53.253 19:27:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:53.253 19:27:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:53.253 19:27:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:53.253 19:27:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:53.253 19:27:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:53.512 19:27:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:53.512 [2024-12-05 19:27:46.904044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:53.512 [2024-12-05 19:27:46.944830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.512 [2024-12-05 19:27:46.944842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.789 [2024-12-05 19:27:46.998364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.789 [2024-12-05 19:27:46.998461] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:53.789 [2024-12-05 19:27:46.998474] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:56.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:56.324 19:27:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58202 /var/tmp/spdk-nbd.sock 00:07:56.324 19:27:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58202 ']' 00:07:56.324 19:27:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:56.324 19:27:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.324 19:27:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
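Note: the nbd_common.sh trace above drives a simple data-verify path: fill a temp file from /dev/urandom, copy it onto each exported NBD device with O_DIRECT, read it back with cmp, then detach the devices over the dedicated /var/tmp/spdk-nbd.sock RPC socket. The following is a minimal standalone sketch of that flow, not the test script itself; the device list, block sizes and rpc.py path are taken from the trace, while the temp-file location and the assumption that an SPDK target is already exporting /dev/nbd0 and /dev/nbd1 are editorial.

    #!/usr/bin/env bash
    # Hedged reconstruction of the write/verify loop seen in bdev/nbd_common.sh above.
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest                                  # the trace uses test/event/nbdrandtest
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py            # path as it appears in the trace

    # 1 MiB of random data as the reference pattern
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

    # write the pattern to every exported NBD device, bypassing the page cache
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # read it back and compare byte for byte
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

    # detach the devices through the NBD RPC socket
    for dev in "${nbd_list[@]}"; do
        "$rpc" -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
    done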
00:07:56.324 19:27:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.324 19:27:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:56.892 19:27:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.892 19:27:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:56.892 19:27:50 event.app_repeat -- event/event.sh@39 -- # killprocess 58202 00:07:56.892 19:27:50 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58202 ']' 00:07:56.892 19:27:50 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58202 00:07:56.892 19:27:50 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:56.892 19:27:50 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.892 19:27:50 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58202 00:07:56.892 killing process with pid 58202 00:07:56.892 19:27:50 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.892 19:27:50 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.892 19:27:50 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58202' 00:07:56.892 19:27:50 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58202 00:07:56.892 19:27:50 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58202 00:07:56.892 spdk_app_start is called in Round 0. 00:07:56.892 Shutdown signal received, stop current app iteration 00:07:56.892 Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 reinitialization... 00:07:56.892 spdk_app_start is called in Round 1. 00:07:56.892 Shutdown signal received, stop current app iteration 00:07:56.893 Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 reinitialization... 00:07:56.893 spdk_app_start is called in Round 2. 00:07:56.893 Shutdown signal received, stop current app iteration 00:07:56.893 Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 reinitialization... 00:07:56.893 spdk_app_start is called in Round 3. 00:07:56.893 Shutdown signal received, stop current app iteration 00:07:56.893 ************************************ 00:07:56.893 END TEST app_repeat 00:07:56.893 ************************************ 00:07:56.893 19:27:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:56.893 19:27:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:56.893 00:07:56.893 real 0m19.348s 00:07:56.893 user 0m44.215s 00:07:56.893 sys 0m2.962s 00:07:56.893 19:27:50 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.893 19:27:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:56.893 19:27:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:56.893 19:27:50 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:56.893 19:27:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.893 19:27:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.893 19:27:50 event -- common/autotest_common.sh@10 -- # set +x 00:07:56.893 ************************************ 00:07:56.893 START TEST cpu_locks 00:07:56.893 ************************************ 00:07:56.893 19:27:50 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:57.151 * Looking for test storage... 
00:07:57.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:57.151 19:27:50 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:57.151 19:27:50 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:57.151 19:27:50 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:57.151 19:27:50 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.151 19:27:50 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:57.151 19:27:50 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.151 19:27:50 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:57.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.151 --rc genhtml_branch_coverage=1 00:07:57.151 --rc genhtml_function_coverage=1 00:07:57.151 --rc genhtml_legend=1 00:07:57.151 --rc geninfo_all_blocks=1 00:07:57.151 --rc geninfo_unexecuted_blocks=1 00:07:57.151 00:07:57.151 ' 00:07:57.151 19:27:50 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:57.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.151 --rc genhtml_branch_coverage=1 00:07:57.151 --rc genhtml_function_coverage=1 
00:07:57.151 --rc genhtml_legend=1 00:07:57.151 --rc geninfo_all_blocks=1 00:07:57.151 --rc geninfo_unexecuted_blocks=1 00:07:57.151 00:07:57.151 ' 00:07:57.151 19:27:50 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:57.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.151 --rc genhtml_branch_coverage=1 00:07:57.151 --rc genhtml_function_coverage=1 00:07:57.151 --rc genhtml_legend=1 00:07:57.151 --rc geninfo_all_blocks=1 00:07:57.151 --rc geninfo_unexecuted_blocks=1 00:07:57.151 00:07:57.151 ' 00:07:57.151 19:27:50 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:57.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.151 --rc genhtml_branch_coverage=1 00:07:57.151 --rc genhtml_function_coverage=1 00:07:57.151 --rc genhtml_legend=1 00:07:57.151 --rc geninfo_all_blocks=1 00:07:57.151 --rc geninfo_unexecuted_blocks=1 00:07:57.151 00:07:57.151 ' 00:07:57.151 19:27:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:57.151 19:27:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:57.151 19:27:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:57.151 19:27:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:57.151 19:27:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.151 19:27:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.151 19:27:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.151 ************************************ 00:07:57.151 START TEST default_locks 00:07:57.151 ************************************ 00:07:57.151 19:27:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:57.151 19:27:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58642 00:07:57.151 19:27:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58642 00:07:57.151 19:27:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:57.151 19:27:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58642 ']' 00:07:57.151 19:27:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.151 19:27:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.151 19:27:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.151 19:27:50 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.151 19:27:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.151 [2024-12-05 19:27:50.574986] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:07:57.151 [2024-12-05 19:27:50.575355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58642 ] 00:07:57.410 [2024-12-05 19:27:50.721286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.410 [2024-12-05 19:27:50.778591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.410 [2024-12-05 19:27:50.845319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.347 19:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.347 19:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:58.347 19:27:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58642 00:07:58.347 19:27:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58642 00:07:58.347 19:27:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:58.606 19:27:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58642 00:07:58.606 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58642 ']' 00:07:58.606 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58642 00:07:58.606 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:58.606 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.606 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58642 00:07:58.606 killing process with pid 58642 00:07:58.606 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.606 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.606 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58642' 00:07:58.606 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58642 00:07:58.607 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58642 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58642 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58642 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58642 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58642 ']' 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.175 
19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.175 ERROR: process (pid: 58642) is no longer running 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.175 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58642) - No such process 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:59.175 00:07:59.175 real 0m1.915s 00:07:59.175 user 0m2.092s 00:07:59.175 sys 0m0.569s 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.175 ************************************ 00:07:59.175 END TEST default_locks 00:07:59.175 ************************************ 00:07:59.175 19:27:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.175 19:27:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:59.175 19:27:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.175 19:27:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.175 19:27:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.175 ************************************ 00:07:59.175 START TEST default_locks_via_rpc 00:07:59.175 ************************************ 00:07:59.175 19:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:59.175 19:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58694 00:07:59.175 19:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:59.175 19:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58694 00:07:59.175 19:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58694 ']' 00:07:59.175 19:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
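Note: the default_locks test traced above reduces to: start spdk_tgt pinned to core 0 (-m 0x1), confirm the process holds an flock on a /var/tmp/spdk_cpu_lock_* file (checked with lslocks), kill it, and verify that waiting on the dead PID fails cleanly. A hedged sketch of the lock check follows; the spdk_tgt path, core mask and grep pattern come from the trace, while the sleep stands in for the real waitforlisten helper.

    # Hedged sketch of the default_locks flow
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # core mask 0x1 -> core 0 only
    pid=$!
    sleep 1                                                     # the real test waits for the RPC socket instead

    # the claimed core shows up as an flock on a /var/tmp/spdk_cpu_lock_* file
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $pid"

    kill "$pid" && wait "$pid"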
00:07:59.175 19:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.175 19:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.175 19:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.175 19:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.175 [2024-12-05 19:27:52.532644] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:59.175 [2024-12-05 19:27:52.532985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58694 ] 00:07:59.435 [2024-12-05 19:27:52.673654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.435 [2024-12-05 19:27:52.720994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.435 [2024-12-05 19:27:52.787789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.695 19:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.695 19:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:59.695 19:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:59.695 19:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.695 19:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.695 19:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.695 19:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:59.695 19:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:59.695 19:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:59.695 19:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:59.695 19:27:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:59.695 19:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.695 19:27:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.695 19:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.695 19:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58694 00:07:59.695 19:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58694 00:07:59.695 19:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:00.263 19:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58694 00:08:00.263 19:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58694 ']' 00:08:00.263 19:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58694 00:08:00.263 19:27:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:08:00.263 19:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.263 19:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58694 00:08:00.263 killing process with pid 58694 00:08:00.263 19:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.263 19:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.263 19:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58694' 00:08:00.263 19:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58694 00:08:00.263 19:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58694 00:08:00.521 ************************************ 00:08:00.521 END TEST default_locks_via_rpc 00:08:00.521 ************************************ 00:08:00.521 00:08:00.521 real 0m1.348s 00:08:00.521 user 0m1.301s 00:08:00.521 sys 0m0.536s 00:08:00.521 19:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.521 19:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.521 19:27:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:00.521 19:27:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.521 19:27:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.521 19:27:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.521 ************************************ 00:08:00.521 START TEST non_locking_app_on_locked_coremask 00:08:00.521 ************************************ 00:08:00.521 19:27:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:08:00.521 19:27:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58737 00:08:00.521 19:27:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58737 /var/tmp/spdk.sock 00:08:00.521 19:27:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:00.521 19:27:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58737 ']' 00:08:00.521 19:27:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.521 19:27:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.521 19:27:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
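Note: default_locks_via_rpc exercises the same per-core lock, but toggled at runtime over the RPC socket instead of at startup: the trace calls framework_disable_cpumask_locks, checks that no spdk_cpu_lock_* file is held, then framework_enable_cpumask_locks. A short hedged sketch, assuming a target is already listening on the default /var/tmp/spdk.sock; the hard-coded PID is simply the one from the trace.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    pid=58694                                     # PID of the running spdk_tgt in the trace; use $! in practice

    "$rpc" framework_disable_cpumask_locks        # release the per-core lock files
    lslocks -p "$pid" | grep -c spdk_cpu_lock     # expected: 0

    "$rpc" framework_enable_cpumask_locks         # re-acquire them
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "lock re-acquired"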
00:08:00.521 19:27:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.521 19:27:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.521 [2024-12-05 19:27:53.941752] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:00.521 [2024-12-05 19:27:53.942195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58737 ] 00:08:00.780 [2024-12-05 19:27:54.090279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.780 [2024-12-05 19:27:54.146651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.780 [2024-12-05 19:27:54.212068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:01.037 19:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.037 19:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:01.037 19:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58746 00:08:01.037 19:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:01.037 19:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58746 /var/tmp/spdk2.sock 00:08:01.037 19:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58746 ']' 00:08:01.037 19:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:01.037 19:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.037 19:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:01.037 19:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.037 19:27:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:01.295 [2024-12-05 19:27:54.483288] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:01.295 [2024-12-05 19:27:54.483611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58746 ] 00:08:01.295 [2024-12-05 19:27:54.642875] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:01.295 [2024-12-05 19:27:54.642976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.552 [2024-12-05 19:27:54.768189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.552 [2024-12-05 19:27:54.900660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.117 19:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.117 19:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:02.117 19:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58737 00:08:02.117 19:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58737 00:08:02.117 19:27:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:03.050 19:27:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58737 00:08:03.050 19:27:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58737 ']' 00:08:03.050 19:27:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58737 00:08:03.050 19:27:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:03.050 19:27:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.050 19:27:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58737 00:08:03.050 19:27:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.050 19:27:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.050 killing process with pid 58737 00:08:03.050 19:27:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58737' 00:08:03.050 19:27:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58737 00:08:03.050 19:27:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58737 00:08:03.983 19:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58746 00:08:03.983 19:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58746 ']' 00:08:03.983 19:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58746 00:08:03.983 19:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:03.983 19:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.983 19:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58746 00:08:03.983 killing process with pid 58746 00:08:03.983 19:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.983 19:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.983 19:27:57 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58746' 00:08:03.983 19:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58746 00:08:03.984 19:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58746 00:08:04.242 ************************************ 00:08:04.242 END TEST non_locking_app_on_locked_coremask 00:08:04.242 ************************************ 00:08:04.242 00:08:04.242 real 0m3.650s 00:08:04.242 user 0m4.050s 00:08:04.242 sys 0m1.073s 00:08:04.242 19:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.242 19:27:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:04.242 19:27:57 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:04.242 19:27:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.242 19:27:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.242 19:27:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:04.242 ************************************ 00:08:04.242 START TEST locking_app_on_unlocked_coremask 00:08:04.242 ************************************ 00:08:04.242 19:27:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:08:04.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.242 19:27:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58813 00:08:04.242 19:27:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58813 /var/tmp/spdk.sock 00:08:04.242 19:27:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:04.242 19:27:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58813 ']' 00:08:04.242 19:27:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.242 19:27:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.242 19:27:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.242 19:27:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.242 19:27:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:04.242 [2024-12-05 19:27:57.641148] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:04.242 [2024-12-05 19:27:57.641252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58813 ] 00:08:04.501 [2024-12-05 19:27:57.786454] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
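Note: the non_locking/unlocked cases above depend on running a second spdk_tgt on an already-claimed core without tripping over the first instance's lock; the second instance is started with --disable-cpumask-locks and its own RPC socket (-r /var/tmp/spdk2.sock). A hedged sketch of that pairing, with the binary path, flags and socket names taken from the trace:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # first instance: core 0, cpumask locks enabled (the default)
    "$bin" -m 0x1 &
    pid1=$!

    # second instance: same core, but it must not try to claim the core lock
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!

    # both run concurrently; only pid1 holds /var/tmp/spdk_cpu_lock_000
    lslocks | grep spdk_cpu_lock

    kill "$pid1" "$pid2"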
00:08:04.501 [2024-12-05 19:27:57.786682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.501 [2024-12-05 19:27:57.846251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.501 [2024-12-05 19:27:57.916155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:04.760 19:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.760 19:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:04.760 19:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58821 00:08:04.760 19:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58821 /var/tmp/spdk2.sock 00:08:04.760 19:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:04.760 19:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58821 ']' 00:08:04.760 19:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:04.760 19:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.760 19:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:04.760 19:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.760 19:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:04.760 [2024-12-05 19:27:58.179132] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:04.760 [2024-12-05 19:27:58.180074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58821 ] 00:08:05.019 [2024-12-05 19:27:58.342487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.019 [2024-12-05 19:27:58.454245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.278 [2024-12-05 19:27:58.587814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.846 19:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.846 19:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:05.846 19:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58821 00:08:05.846 19:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58821 00:08:05.846 19:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:06.783 19:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58813 00:08:06.783 19:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58813 ']' 00:08:06.783 19:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58813 00:08:06.783 19:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:06.783 19:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.783 19:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58813 00:08:06.783 killing process with pid 58813 00:08:06.783 19:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.783 19:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.783 19:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58813' 00:08:06.783 19:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58813 00:08:06.783 19:27:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58813 00:08:07.351 19:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58821 00:08:07.351 19:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58821 ']' 00:08:07.351 19:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58821 00:08:07.351 19:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:07.351 19:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.351 19:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58821 00:08:07.351 killing process with pid 58821 00:08:07.351 19:28:00 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.351 19:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.351 19:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58821' 00:08:07.351 19:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58821 00:08:07.351 19:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58821 00:08:07.918 ************************************ 00:08:07.918 END TEST locking_app_on_unlocked_coremask 00:08:07.918 ************************************ 00:08:07.918 00:08:07.918 real 0m3.504s 00:08:07.918 user 0m3.815s 00:08:07.918 sys 0m1.093s 00:08:07.918 19:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.918 19:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.918 19:28:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:07.918 19:28:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.918 19:28:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.918 19:28:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:07.918 ************************************ 00:08:07.918 START TEST locking_app_on_locked_coremask 00:08:07.918 ************************************ 00:08:07.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.918 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:07.918 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58888 00:08:07.918 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:07.918 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58888 /var/tmp/spdk.sock 00:08:07.918 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58888 ']' 00:08:07.918 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.918 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.918 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.918 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.918 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.918 [2024-12-05 19:28:01.200994] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:07.918 [2024-12-05 19:28:01.201509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58888 ] 00:08:07.918 [2024-12-05 19:28:01.348644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.225 [2024-12-05 19:28:01.410483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.225 [2024-12-05 19:28:01.479700] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.483 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.483 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:08.483 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58897 00:08:08.483 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:08.483 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58897 /var/tmp/spdk2.sock 00:08:08.483 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:08.483 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58897 /var/tmp/spdk2.sock 00:08:08.483 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:08.483 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.484 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:08.484 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.484 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58897 /var/tmp/spdk2.sock 00:08:08.484 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58897 ']' 00:08:08.484 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:08.484 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.484 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:08.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:08.484 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.484 19:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.484 [2024-12-05 19:28:01.748202] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:08.484 [2024-12-05 19:28:01.748504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58897 ] 00:08:08.484 [2024-12-05 19:28:01.908511] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58888 has claimed it. 00:08:08.484 [2024-12-05 19:28:01.908643] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:09.052 ERROR: process (pid: 58897) is no longer running 00:08:09.052 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58897) - No such process 00:08:09.052 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.052 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:09.052 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:09.052 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:09.052 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:09.052 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:09.052 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58888 00:08:09.052 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58888 00:08:09.052 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:09.621 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58888 00:08:09.621 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58888 ']' 00:08:09.621 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58888 00:08:09.621 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:09.621 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.621 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58888 00:08:09.621 killing process with pid 58888 00:08:09.621 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.621 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.621 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58888' 00:08:09.621 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58888 00:08:09.621 19:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58888 00:08:10.190 00:08:10.190 real 0m2.199s 00:08:10.190 user 0m2.485s 00:08:10.190 sys 0m0.601s 00:08:10.190 19:28:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.190 ************************************ 00:08:10.190 END 
TEST locking_app_on_locked_coremask 00:08:10.190 ************************************ 00:08:10.190 19:28:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.190 19:28:03 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:10.190 19:28:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.190 19:28:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.190 19:28:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:10.190 ************************************ 00:08:10.190 START TEST locking_overlapped_coremask 00:08:10.190 ************************************ 00:08:10.190 19:28:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:10.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.190 19:28:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58942 00:08:10.190 19:28:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58942 /var/tmp/spdk.sock 00:08:10.190 19:28:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58942 ']' 00:08:10.190 19:28:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.190 19:28:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:10.190 19:28:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.190 19:28:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.190 19:28:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.190 19:28:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.190 [2024-12-05 19:28:03.452425] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:10.190 [2024-12-05 19:28:03.452565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58942 ] 00:08:10.190 [2024-12-05 19:28:03.598997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:10.450 [2024-12-05 19:28:03.652963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.450 [2024-12-05 19:28:03.653140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.450 [2024-12-05 19:28:03.653167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.450 [2024-12-05 19:28:03.719590] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58960 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58960 /var/tmp/spdk2.sock 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58960 /var/tmp/spdk2.sock 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:11.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58960 /var/tmp/spdk2.sock 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58960 ']' 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.019 19:28:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:11.278 [2024-12-05 19:28:04.483666] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:11.279 [2024-12-05 19:28:04.483766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58960 ] 00:08:11.279 [2024-12-05 19:28:04.651098] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58942 has claimed it. 00:08:11.279 [2024-12-05 19:28:04.654253] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:11.860 ERROR: process (pid: 58960) is no longer running 00:08:11.860 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58960) - No such process 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58942 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58942 ']' 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58942 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58942 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58942' 00:08:11.860 killing process with pid 58942 00:08:11.860 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58942 00:08:11.860 19:28:05 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58942 00:08:12.428 ************************************ 00:08:12.428 END TEST locking_overlapped_coremask 00:08:12.428 ************************************ 00:08:12.428 00:08:12.428 real 0m2.256s 00:08:12.428 user 0m6.439s 00:08:12.428 sys 0m0.434s 00:08:12.428 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.428 19:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:12.428 19:28:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:12.428 19:28:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.428 19:28:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.428 19:28:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:12.428 ************************************ 00:08:12.428 START TEST locking_overlapped_coremask_via_rpc 00:08:12.428 ************************************ 00:08:12.428 19:28:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:12.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.428 19:28:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59006 00:08:12.428 19:28:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59006 /var/tmp/spdk.sock 00:08:12.428 19:28:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:12.428 19:28:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59006 ']' 00:08:12.428 19:28:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.428 19:28:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.428 19:28:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.428 19:28:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.428 19:28:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.428 [2024-12-05 19:28:05.756379] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:12.429 [2024-12-05 19:28:05.756731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59006 ] 00:08:12.688 [2024-12-05 19:28:05.897731] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
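# Editor's note - illustrative sketch only, not emitted by the run: the check_remaining_locks
# step that closed the locking_overlapped_coremask test above verifies that the surviving
# target (mask 0x7) holds exactly one lock file per claimed core and nothing more.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "only cores 0-2 are locked"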
00:08:12.688 [2024-12-05 19:28:05.897974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:12.688 [2024-12-05 19:28:05.953410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.688 [2024-12-05 19:28:05.953562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.688 [2024-12-05 19:28:05.953564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.688 [2024-12-05 19:28:06.021590] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.625 19:28:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.625 19:28:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:13.625 19:28:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:13.625 19:28:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59024 00:08:13.625 19:28:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59024 /var/tmp/spdk2.sock 00:08:13.625 19:28:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59024 ']' 00:08:13.625 19:28:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:13.625 19:28:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.625 19:28:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:13.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:13.625 19:28:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.625 19:28:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.626 [2024-12-05 19:28:06.799864] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:13.626 [2024-12-05 19:28:06.799975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59024 ] 00:08:13.626 [2024-12-05 19:28:06.966263] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:13.626 [2024-12-05 19:28:06.966354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:13.885 [2024-12-05 19:28:07.092852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.885 [2024-12-05 19:28:07.096281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:13.885 [2024-12-05 19:28:07.096285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.885 [2024-12-05 19:28:07.231676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.454 [2024-12-05 19:28:07.821311] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59006 has claimed it. 00:08:14.454 request: 00:08:14.454 { 00:08:14.454 "method": "framework_enable_cpumask_locks", 00:08:14.454 "req_id": 1 00:08:14.454 } 00:08:14.454 Got JSON-RPC error response 00:08:14.454 response: 00:08:14.454 { 00:08:14.454 "code": -32603, 00:08:14.454 "message": "Failed to claim CPU core: 2" 00:08:14.454 } 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:14.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
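# Editor's note - illustrative sketch only, not part of the run above: the RPC sequence that
# locking_overlapped_coremask_via_rpc drives. Both targets start with --disable-cpumask-locks
# (masks 0x7 and 0x1c share core 2), then the locks are switched on over JSON-RPC; the second
# enable fails with -32603 "Failed to claim CPU core: 2", as logged above. Assumes both targets
# are still listening on the sockets shown.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # pid 59006 claims cores 0-2
"$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # rejected: core 2 already claimed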
00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59006 /var/tmp/spdk.sock 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59006 ']' 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.454 19:28:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.715 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.715 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:14.715 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59024 /var/tmp/spdk2.sock 00:08:14.715 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59024 ']' 00:08:14.715 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:14.715 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.715 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:14.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:14.715 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.715 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.304 ************************************ 00:08:15.304 END TEST locking_overlapped_coremask_via_rpc 00:08:15.304 ************************************ 00:08:15.304 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.304 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:15.304 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:15.304 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:15.304 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:15.304 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:15.304 00:08:15.304 real 0m2.735s 00:08:15.304 user 0m1.484s 00:08:15.304 sys 0m0.183s 00:08:15.304 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.304 19:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.304 19:28:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:15.304 19:28:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59006 ]] 00:08:15.304 19:28:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59006 00:08:15.304 19:28:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59006 ']' 00:08:15.304 19:28:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59006 00:08:15.304 19:28:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:15.304 19:28:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.304 19:28:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59006 00:08:15.304 killing process with pid 59006 00:08:15.304 19:28:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.304 19:28:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.304 19:28:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59006' 00:08:15.304 19:28:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59006 00:08:15.304 19:28:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59006 00:08:15.564 19:28:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59024 ]] 00:08:15.564 19:28:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59024 00:08:15.564 19:28:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59024 ']' 00:08:15.564 19:28:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59024 00:08:15.564 19:28:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:15.564 19:28:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.564 
19:28:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59024 00:08:15.564 19:28:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:15.564 killing process with pid 59024 00:08:15.564 19:28:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:15.564 19:28:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59024' 00:08:15.564 19:28:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59024 00:08:15.564 19:28:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59024 00:08:16.132 19:28:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:16.132 19:28:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:16.132 19:28:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59006 ]] 00:08:16.132 19:28:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59006 00:08:16.132 19:28:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59006 ']' 00:08:16.132 19:28:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59006 00:08:16.132 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59006) - No such process 00:08:16.132 Process with pid 59006 is not found 00:08:16.132 19:28:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59006 is not found' 00:08:16.132 19:28:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59024 ]] 00:08:16.132 19:28:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59024 00:08:16.132 19:28:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59024 ']' 00:08:16.132 19:28:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59024 00:08:16.132 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59024) - No such process 00:08:16.132 Process with pid 59024 is not found 00:08:16.132 19:28:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59024 is not found' 00:08:16.132 19:28:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:16.132 ************************************ 00:08:16.132 END TEST cpu_locks 00:08:16.132 ************************************ 00:08:16.132 00:08:16.132 real 0m19.024s 00:08:16.132 user 0m34.934s 00:08:16.132 sys 0m5.408s 00:08:16.132 19:28:09 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.132 19:28:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:16.132 00:08:16.132 real 0m47.533s 00:08:16.132 user 1m34.908s 00:08:16.133 sys 0m9.221s 00:08:16.133 19:28:09 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.133 19:28:09 event -- common/autotest_common.sh@10 -- # set +x 00:08:16.133 ************************************ 00:08:16.133 END TEST event 00:08:16.133 ************************************ 00:08:16.133 19:28:09 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:16.133 19:28:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.133 19:28:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.133 19:28:09 -- common/autotest_common.sh@10 -- # set +x 00:08:16.133 ************************************ 00:08:16.133 START TEST thread 00:08:16.133 ************************************ 00:08:16.133 19:28:09 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:16.133 * Looking for test storage... 
00:08:16.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:16.133 19:28:09 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:16.133 19:28:09 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:08:16.133 19:28:09 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:16.133 19:28:09 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:16.133 19:28:09 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.133 19:28:09 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.133 19:28:09 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.133 19:28:09 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.133 19:28:09 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.133 19:28:09 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.133 19:28:09 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.133 19:28:09 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.133 19:28:09 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.133 19:28:09 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.133 19:28:09 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.133 19:28:09 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:16.133 19:28:09 thread -- scripts/common.sh@345 -- # : 1 00:08:16.133 19:28:09 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.133 19:28:09 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:16.392 19:28:09 thread -- scripts/common.sh@365 -- # decimal 1 00:08:16.392 19:28:09 thread -- scripts/common.sh@353 -- # local d=1 00:08:16.392 19:28:09 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.392 19:28:09 thread -- scripts/common.sh@355 -- # echo 1 00:08:16.392 19:28:09 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.392 19:28:09 thread -- scripts/common.sh@366 -- # decimal 2 00:08:16.392 19:28:09 thread -- scripts/common.sh@353 -- # local d=2 00:08:16.392 19:28:09 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.392 19:28:09 thread -- scripts/common.sh@355 -- # echo 2 00:08:16.392 19:28:09 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.392 19:28:09 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.392 19:28:09 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.392 19:28:09 thread -- scripts/common.sh@368 -- # return 0 00:08:16.392 19:28:09 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.392 19:28:09 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:16.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.392 --rc genhtml_branch_coverage=1 00:08:16.392 --rc genhtml_function_coverage=1 00:08:16.392 --rc genhtml_legend=1 00:08:16.392 --rc geninfo_all_blocks=1 00:08:16.392 --rc geninfo_unexecuted_blocks=1 00:08:16.392 00:08:16.392 ' 00:08:16.392 19:28:09 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:16.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.392 --rc genhtml_branch_coverage=1 00:08:16.392 --rc genhtml_function_coverage=1 00:08:16.392 --rc genhtml_legend=1 00:08:16.392 --rc geninfo_all_blocks=1 00:08:16.392 --rc geninfo_unexecuted_blocks=1 00:08:16.392 00:08:16.392 ' 00:08:16.392 19:28:09 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:16.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:16.392 --rc genhtml_branch_coverage=1 00:08:16.392 --rc genhtml_function_coverage=1 00:08:16.392 --rc genhtml_legend=1 00:08:16.392 --rc geninfo_all_blocks=1 00:08:16.392 --rc geninfo_unexecuted_blocks=1 00:08:16.392 00:08:16.392 ' 00:08:16.392 19:28:09 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:16.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.392 --rc genhtml_branch_coverage=1 00:08:16.392 --rc genhtml_function_coverage=1 00:08:16.392 --rc genhtml_legend=1 00:08:16.392 --rc geninfo_all_blocks=1 00:08:16.392 --rc geninfo_unexecuted_blocks=1 00:08:16.392 00:08:16.392 ' 00:08:16.392 19:28:09 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:16.392 19:28:09 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:16.392 19:28:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.392 19:28:09 thread -- common/autotest_common.sh@10 -- # set +x 00:08:16.392 ************************************ 00:08:16.392 START TEST thread_poller_perf 00:08:16.392 ************************************ 00:08:16.392 19:28:09 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:16.392 [2024-12-05 19:28:09.614605] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:16.392 [2024-12-05 19:28:09.614895] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59154 ] 00:08:16.392 [2024-12-05 19:28:09.756895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.392 [2024-12-05 19:28:09.804409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.392 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:17.770 [2024-12-05T19:28:11.207Z] ====================================== 00:08:17.770 [2024-12-05T19:28:11.207Z] busy:2206032534 (cyc) 00:08:17.770 [2024-12-05T19:28:11.207Z] total_run_count: 362000 00:08:17.770 [2024-12-05T19:28:11.207Z] tsc_hz: 2200000000 (cyc) 00:08:17.770 [2024-12-05T19:28:11.207Z] ====================================== 00:08:17.770 [2024-12-05T19:28:11.207Z] poller_cost: 6094 (cyc), 2770 (nsec) 00:08:17.770 00:08:17.770 real 0m1.266s 00:08:17.770 user 0m1.118s 00:08:17.770 sys 0m0.041s 00:08:17.770 19:28:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.770 19:28:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:17.770 ************************************ 00:08:17.770 END TEST thread_poller_perf 00:08:17.770 ************************************ 00:08:17.770 19:28:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:17.770 19:28:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:17.770 19:28:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.770 19:28:10 thread -- common/autotest_common.sh@10 -- # set +x 00:08:17.770 ************************************ 00:08:17.770 START TEST thread_poller_perf 00:08:17.770 ************************************ 00:08:17.770 19:28:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:17.770 [2024-12-05 19:28:10.937465] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:17.770 [2024-12-05 19:28:10.937754] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59190 ] 00:08:17.770 [2024-12-05 19:28:11.079923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.770 Running 1000 pollers for 1 seconds with 0 microseconds period. 
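# Editor's note - illustrative arithmetic, not emitted by poller_perf: how the poller_cost
# reported for the first run above falls out of the busy-cycle counter, run count, and TSC rate.
busy=2206032534; runs=362000; tsc_hz=2200000000
cyc=$(( busy / runs ))                       # 6094 cycles per poll
echo "$cyc cyc, $(( cyc * 1000000000 / tsc_hz )) nsec per poll"   # 6094 cyc, 2770 nsec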
00:08:17.770 [2024-12-05 19:28:11.135536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.149 [2024-12-05T19:28:12.586Z] ====================================== 00:08:19.149 [2024-12-05T19:28:12.586Z] busy:2201979356 (cyc) 00:08:19.149 [2024-12-05T19:28:12.586Z] total_run_count: 4623000 00:08:19.149 [2024-12-05T19:28:12.586Z] tsc_hz: 2200000000 (cyc) 00:08:19.149 [2024-12-05T19:28:12.586Z] ====================================== 00:08:19.149 [2024-12-05T19:28:12.586Z] poller_cost: 476 (cyc), 216 (nsec) 00:08:19.149 00:08:19.149 real 0m1.260s 00:08:19.149 user 0m1.116s 00:08:19.149 sys 0m0.038s 00:08:19.149 19:28:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.149 ************************************ 00:08:19.149 END TEST thread_poller_perf 00:08:19.149 ************************************ 00:08:19.149 19:28:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:19.149 19:28:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:19.149 ************************************ 00:08:19.149 END TEST thread 00:08:19.149 ************************************ 00:08:19.149 00:08:19.149 real 0m2.827s 00:08:19.149 user 0m2.387s 00:08:19.149 sys 0m0.224s 00:08:19.149 19:28:12 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.149 19:28:12 thread -- common/autotest_common.sh@10 -- # set +x 00:08:19.149 19:28:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:19.149 19:28:12 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:19.149 19:28:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.149 19:28:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.149 19:28:12 -- common/autotest_common.sh@10 -- # set +x 00:08:19.149 ************************************ 00:08:19.149 START TEST app_cmdline 00:08:19.149 ************************************ 00:08:19.149 19:28:12 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:19.149 * Looking for test storage... 
00:08:19.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:19.149 19:28:12 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:19.149 19:28:12 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:08:19.149 19:28:12 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:19.149 19:28:12 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.149 19:28:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:19.149 19:28:12 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.149 19:28:12 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:19.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.149 --rc genhtml_branch_coverage=1 00:08:19.149 --rc genhtml_function_coverage=1 00:08:19.149 --rc genhtml_legend=1 00:08:19.149 --rc geninfo_all_blocks=1 00:08:19.149 --rc geninfo_unexecuted_blocks=1 00:08:19.149 00:08:19.149 ' 00:08:19.149 19:28:12 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:19.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.149 --rc genhtml_branch_coverage=1 00:08:19.149 --rc genhtml_function_coverage=1 00:08:19.149 --rc genhtml_legend=1 00:08:19.149 --rc geninfo_all_blocks=1 00:08:19.149 --rc geninfo_unexecuted_blocks=1 00:08:19.149 
00:08:19.149 ' 00:08:19.149 19:28:12 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:19.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.149 --rc genhtml_branch_coverage=1 00:08:19.149 --rc genhtml_function_coverage=1 00:08:19.149 --rc genhtml_legend=1 00:08:19.149 --rc geninfo_all_blocks=1 00:08:19.149 --rc geninfo_unexecuted_blocks=1 00:08:19.149 00:08:19.149 ' 00:08:19.149 19:28:12 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:19.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.149 --rc genhtml_branch_coverage=1 00:08:19.149 --rc genhtml_function_coverage=1 00:08:19.149 --rc genhtml_legend=1 00:08:19.149 --rc geninfo_all_blocks=1 00:08:19.149 --rc geninfo_unexecuted_blocks=1 00:08:19.149 00:08:19.149 ' 00:08:19.149 19:28:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:19.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.149 19:28:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59271 00:08:19.149 19:28:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59271 00:08:19.149 19:28:12 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59271 ']' 00:08:19.149 19:28:12 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.149 19:28:12 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:19.149 19:28:12 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.149 19:28:12 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.149 19:28:12 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.149 19:28:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:19.149 [2024-12-05 19:28:12.532905] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:19.150 [2024-12-05 19:28:12.533254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59271 ] 00:08:19.409 [2024-12-05 19:28:12.680942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.409 [2024-12-05 19:28:12.735099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.409 [2024-12-05 19:28:12.804528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.667 19:28:12 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.667 19:28:12 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:19.667 19:28:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:19.926 { 00:08:19.926 "version": "SPDK v25.01-pre git sha1 2c140f58f", 00:08:19.926 "fields": { 00:08:19.926 "major": 25, 00:08:19.926 "minor": 1, 00:08:19.926 "patch": 0, 00:08:19.926 "suffix": "-pre", 00:08:19.926 "commit": "2c140f58f" 00:08:19.926 } 00:08:19.926 } 00:08:19.926 19:28:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:19.926 19:28:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:19.926 19:28:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:19.926 19:28:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:19.926 19:28:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:19.926 19:28:13 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.926 19:28:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:19.926 19:28:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:19.926 19:28:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:19.926 19:28:13 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.926 19:28:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:19.926 19:28:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:19.926 19:28:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:19.926 19:28:13 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:19.926 19:28:13 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:19.926 19:28:13 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:20.183 19:28:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.183 19:28:13 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:20.183 19:28:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.183 19:28:13 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:20.183 19:28:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.183 19:28:13 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:20.183 19:28:13 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:20.183 19:28:13 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:20.442 request: 00:08:20.442 { 00:08:20.442 "method": "env_dpdk_get_mem_stats", 00:08:20.442 "req_id": 1 00:08:20.442 } 00:08:20.442 Got JSON-RPC error response 00:08:20.442 response: 00:08:20.442 { 00:08:20.442 "code": -32601, 00:08:20.442 "message": "Method not found" 00:08:20.442 } 00:08:20.442 19:28:13 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:20.442 19:28:13 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:20.442 19:28:13 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:20.442 19:28:13 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:20.442 19:28:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59271 00:08:20.442 19:28:13 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59271 ']' 00:08:20.442 19:28:13 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59271 00:08:20.442 19:28:13 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:20.442 19:28:13 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.442 19:28:13 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59271 00:08:20.442 killing process with pid 59271 00:08:20.442 19:28:13 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.442 19:28:13 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.442 19:28:13 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59271' 00:08:20.442 19:28:13 app_cmdline -- common/autotest_common.sh@973 -- # kill 59271 00:08:20.442 19:28:13 app_cmdline -- common/autotest_common.sh@978 -- # wait 59271 00:08:20.700 00:08:20.700 real 0m1.794s 00:08:20.700 user 0m2.220s 00:08:20.700 sys 0m0.466s 00:08:20.700 19:28:14 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.700 ************************************ 00:08:20.700 END TEST app_cmdline 00:08:20.700 19:28:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:20.700 ************************************ 00:08:20.700 19:28:14 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:20.700 19:28:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.700 19:28:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.700 19:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:20.700 ************************************ 00:08:20.700 START TEST version 00:08:20.700 ************************************ 00:08:20.700 19:28:14 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:21.003 * Looking for test storage... 
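# Editor's note - illustrative sketch of the allow-list behaviour exercised by the app_cmdline
# test that just finished: spdk_tgt was started with --rpcs-allowed, so only the two listed
# methods answer and anything else returns -32601 "Method not found", as seen above. Assumes
# that target is still running on the default socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" spdk_get_version          # allowed: returns the version object for SPDK v25.01-pre
"$rpc" rpc_get_methods           # allowed: lists exactly the permitted methods
"$rpc" env_dpdk_get_mem_stats    # not on the allow-list: JSON-RPC error -32601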
00:08:21.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:21.003 19:28:14 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:21.003 19:28:14 version -- common/autotest_common.sh@1711 -- # lcov --version 00:08:21.003 19:28:14 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:21.003 19:28:14 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:21.003 19:28:14 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.003 19:28:14 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.003 19:28:14 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.003 19:28:14 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.003 19:28:14 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.003 19:28:14 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.003 19:28:14 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.003 19:28:14 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.003 19:28:14 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.003 19:28:14 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.003 19:28:14 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.003 19:28:14 version -- scripts/common.sh@344 -- # case "$op" in 00:08:21.003 19:28:14 version -- scripts/common.sh@345 -- # : 1 00:08:21.003 19:28:14 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.003 19:28:14 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:21.003 19:28:14 version -- scripts/common.sh@365 -- # decimal 1 00:08:21.003 19:28:14 version -- scripts/common.sh@353 -- # local d=1 00:08:21.003 19:28:14 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.003 19:28:14 version -- scripts/common.sh@355 -- # echo 1 00:08:21.003 19:28:14 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.003 19:28:14 version -- scripts/common.sh@366 -- # decimal 2 00:08:21.003 19:28:14 version -- scripts/common.sh@353 -- # local d=2 00:08:21.003 19:28:14 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.003 19:28:14 version -- scripts/common.sh@355 -- # echo 2 00:08:21.003 19:28:14 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.003 19:28:14 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.003 19:28:14 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.003 19:28:14 version -- scripts/common.sh@368 -- # return 0 00:08:21.003 19:28:14 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.003 19:28:14 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:21.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.003 --rc genhtml_branch_coverage=1 00:08:21.003 --rc genhtml_function_coverage=1 00:08:21.003 --rc genhtml_legend=1 00:08:21.003 --rc geninfo_all_blocks=1 00:08:21.003 --rc geninfo_unexecuted_blocks=1 00:08:21.003 00:08:21.003 ' 00:08:21.004 19:28:14 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:21.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.004 --rc genhtml_branch_coverage=1 00:08:21.004 --rc genhtml_function_coverage=1 00:08:21.004 --rc genhtml_legend=1 00:08:21.004 --rc geninfo_all_blocks=1 00:08:21.004 --rc geninfo_unexecuted_blocks=1 00:08:21.004 00:08:21.004 ' 00:08:21.004 19:28:14 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:21.004 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:21.004 --rc genhtml_branch_coverage=1 00:08:21.004 --rc genhtml_function_coverage=1 00:08:21.004 --rc genhtml_legend=1 00:08:21.004 --rc geninfo_all_blocks=1 00:08:21.004 --rc geninfo_unexecuted_blocks=1 00:08:21.004 00:08:21.004 ' 00:08:21.004 19:28:14 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:21.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.004 --rc genhtml_branch_coverage=1 00:08:21.004 --rc genhtml_function_coverage=1 00:08:21.004 --rc genhtml_legend=1 00:08:21.004 --rc geninfo_all_blocks=1 00:08:21.004 --rc geninfo_unexecuted_blocks=1 00:08:21.004 00:08:21.004 ' 00:08:21.004 19:28:14 version -- app/version.sh@17 -- # get_header_version major 00:08:21.004 19:28:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:21.004 19:28:14 version -- app/version.sh@14 -- # cut -f2 00:08:21.004 19:28:14 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.004 19:28:14 version -- app/version.sh@17 -- # major=25 00:08:21.004 19:28:14 version -- app/version.sh@18 -- # get_header_version minor 00:08:21.004 19:28:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:21.004 19:28:14 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.004 19:28:14 version -- app/version.sh@14 -- # cut -f2 00:08:21.004 19:28:14 version -- app/version.sh@18 -- # minor=1 00:08:21.004 19:28:14 version -- app/version.sh@19 -- # get_header_version patch 00:08:21.004 19:28:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:21.004 19:28:14 version -- app/version.sh@14 -- # cut -f2 00:08:21.004 19:28:14 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.004 19:28:14 version -- app/version.sh@19 -- # patch=0 00:08:21.004 19:28:14 version -- app/version.sh@20 -- # get_header_version suffix 00:08:21.004 19:28:14 version -- app/version.sh@14 -- # cut -f2 00:08:21.004 19:28:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:21.004 19:28:14 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.004 19:28:14 version -- app/version.sh@20 -- # suffix=-pre 00:08:21.004 19:28:14 version -- app/version.sh@22 -- # version=25.1 00:08:21.004 19:28:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:21.004 19:28:14 version -- app/version.sh@28 -- # version=25.1rc0 00:08:21.004 19:28:14 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:21.004 19:28:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:21.004 19:28:14 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:21.004 19:28:14 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:21.004 00:08:21.004 real 0m0.278s 00:08:21.004 user 0m0.175s 00:08:21.004 sys 0m0.139s 00:08:21.004 ************************************ 00:08:21.004 END TEST version 00:08:21.004 ************************************ 00:08:21.004 19:28:14 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.004 19:28:14 version -- common/autotest_common.sh@10 -- # set +x 00:08:21.262 19:28:14 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:21.262 19:28:14 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:21.262 19:28:14 -- spdk/autotest.sh@194 -- # uname -s 00:08:21.262 19:28:14 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:21.262 19:28:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:21.262 19:28:14 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:08:21.262 19:28:14 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:08:21.262 19:28:14 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:21.262 19:28:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.262 19:28:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.262 19:28:14 -- common/autotest_common.sh@10 -- # set +x 00:08:21.262 ************************************ 00:08:21.262 START TEST spdk_dd 00:08:21.262 ************************************ 00:08:21.262 19:28:14 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:21.262 * Looking for test storage... 00:08:21.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:21.262 19:28:14 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:21.262 19:28:14 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:21.262 19:28:14 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:21.262 19:28:14 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@345 -- # : 1 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@368 -- # return 0 00:08:21.262 19:28:14 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.262 19:28:14 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:21.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.262 --rc genhtml_branch_coverage=1 00:08:21.262 --rc genhtml_function_coverage=1 00:08:21.262 --rc genhtml_legend=1 00:08:21.262 --rc geninfo_all_blocks=1 00:08:21.262 --rc geninfo_unexecuted_blocks=1 00:08:21.262 00:08:21.262 ' 00:08:21.262 19:28:14 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:21.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.262 --rc genhtml_branch_coverage=1 00:08:21.262 --rc genhtml_function_coverage=1 00:08:21.262 --rc genhtml_legend=1 00:08:21.262 --rc geninfo_all_blocks=1 00:08:21.262 --rc geninfo_unexecuted_blocks=1 00:08:21.262 00:08:21.262 ' 00:08:21.262 19:28:14 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:21.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.262 --rc genhtml_branch_coverage=1 00:08:21.262 --rc genhtml_function_coverage=1 00:08:21.262 --rc genhtml_legend=1 00:08:21.262 --rc geninfo_all_blocks=1 00:08:21.262 --rc geninfo_unexecuted_blocks=1 00:08:21.262 00:08:21.262 ' 00:08:21.262 19:28:14 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:21.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.262 --rc genhtml_branch_coverage=1 00:08:21.262 --rc genhtml_function_coverage=1 00:08:21.262 --rc genhtml_legend=1 00:08:21.262 --rc geninfo_all_blocks=1 00:08:21.262 --rc geninfo_unexecuted_blocks=1 00:08:21.262 00:08:21.262 ' 00:08:21.262 19:28:14 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.262 19:28:14 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.262 19:28:14 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.262 19:28:14 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.262 19:28:14 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.262 19:28:14 spdk_dd -- paths/export.sh@5 -- # export PATH 00:08:21.262 19:28:14 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.262 19:28:14 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:21.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:21.830 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:21.830 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:21.830 19:28:15 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:21.830 19:28:15 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@233 -- # local class 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@235 -- # local progif 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@236 -- # class=01 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:08:21.830 19:28:15 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@18 -- # local i 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@27 -- # return 0 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@18 -- # local i 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@27 -- # return 0 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:08:21.830 19:28:15 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:21.830 19:28:15 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@139 -- # local lib 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:08:21.830 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:08:21.831 * spdk_dd linked to liburing 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:21.831 19:28:15 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:08:21.831 19:28:15 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:08:21.832 19:28:15 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:08:21.832 19:28:15 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:08:21.832 19:28:15 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:08:21.832 19:28:15 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:08:21.832 19:28:15 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:08:21.832 19:28:15 spdk_dd -- dd/common.sh@153 -- # return 0 00:08:21.832 19:28:15 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:21.832 19:28:15 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:21.832 19:28:15 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:21.832 19:28:15 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.832 19:28:15 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:21.832 ************************************ 00:08:21.832 START TEST spdk_dd_basic_rw 00:08:21.832 ************************************ 00:08:21.832 19:28:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:21.832 * Looking for test storage... 00:08:21.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:21.832 19:28:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:22.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.091 --rc genhtml_branch_coverage=1 00:08:22.091 --rc genhtml_function_coverage=1 00:08:22.091 --rc genhtml_legend=1 00:08:22.091 --rc geninfo_all_blocks=1 00:08:22.091 --rc geninfo_unexecuted_blocks=1 00:08:22.091 00:08:22.091 ' 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:22.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.091 --rc genhtml_branch_coverage=1 00:08:22.091 --rc genhtml_function_coverage=1 00:08:22.091 --rc genhtml_legend=1 00:08:22.091 --rc geninfo_all_blocks=1 00:08:22.091 --rc geninfo_unexecuted_blocks=1 00:08:22.091 00:08:22.091 ' 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:22.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.091 --rc genhtml_branch_coverage=1 00:08:22.091 --rc genhtml_function_coverage=1 00:08:22.091 --rc genhtml_legend=1 00:08:22.091 --rc geninfo_all_blocks=1 00:08:22.091 --rc geninfo_unexecuted_blocks=1 00:08:22.091 00:08:22.091 ' 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:22.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.091 --rc genhtml_branch_coverage=1 00:08:22.091 --rc genhtml_function_coverage=1 00:08:22.091 --rc genhtml_legend=1 00:08:22.091 --rc geninfo_all_blocks=1 00:08:22.091 --rc geninfo_unexecuted_blocks=1 00:08:22.091 00:08:22.091 ' 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
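The get_native_nvme_bs step traced below determines the drive's native block size by capturing spdk_nvme_identify output and extracting the active LBA format's data size with two regex matches. A minimal standalone sketch of that probe, assuming the binary path and controller address used in this run (an approximation of the logic visible in the trace, not the actual dd/common.sh source):

  #!/usr/bin/env bash
  # Probe the native block size of the NVMe controller at a given PCIe address,
  # mirroring the two regex matches visible in the trace below.
  pci=0000:00:10.0
  mapfile -t id < <(./build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")

  re_current='Current LBA Format: *LBA Format #([0-9]+)'    # e.g. matches "#04"
  if [[ "${id[*]}" =~ $re_current ]]; then
      lbaf=${BASH_REMATCH[1]}
      re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
      if [[ "${id[*]}" =~ $re_size ]]; then
          echo "${BASH_REMATCH[1]}"    # prints 4096 for the controller in this run
      fi
  fi
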
00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:08:22.091 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:08:22.351 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:22.351 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:22.352 ************************************ 00:08:22.352 START TEST dd_bs_lt_native_bs 00:08:22.352 ************************************ 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:22.352 19:28:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:22.352 { 00:08:22.352 "subsystems": [ 00:08:22.352 { 00:08:22.352 "subsystem": "bdev", 00:08:22.352 "config": [ 00:08:22.352 { 00:08:22.352 "params": { 00:08:22.352 "trtype": "pcie", 00:08:22.352 "traddr": "0000:00:10.0", 00:08:22.352 "name": "Nvme0" 00:08:22.352 }, 00:08:22.352 "method": "bdev_nvme_attach_controller" 00:08:22.352 }, 00:08:22.352 { 00:08:22.352 "method": "bdev_wait_for_examine" 00:08:22.352 } 00:08:22.352 ] 00:08:22.352 } 00:08:22.352 ] 00:08:22.352 } 00:08:22.352 [2024-12-05 19:28:15.659859] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:22.352 [2024-12-05 19:28:15.659965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59615 ] 00:08:22.610 [2024-12-05 19:28:15.815300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.610 [2024-12-05 19:28:15.884100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.610 [2024-12-05 19:28:15.942097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.868 [2024-12-05 19:28:16.057894] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:22.868 [2024-12-05 19:28:16.057993] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:22.868 [2024-12-05 19:28:16.185734] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:22.868 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:08:22.868 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:22.868 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:08:22.868 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:08:22.868 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:08:22.868 ************************************ 00:08:22.868 END TEST dd_bs_lt_native_bs 00:08:22.868 ************************************ 00:08:22.868 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:22.868 00:08:22.868 real 0m0.658s 00:08:22.868 user 0m0.449s 00:08:22.868 sys 0m0.164s 00:08:22.868 
19:28:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.868 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:23.127 ************************************ 00:08:23.127 START TEST dd_rw 00:08:23.127 ************************************ 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:23.127 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:23.695 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:08:23.695 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:23.695 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:23.695 19:28:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:23.695 [2024-12-05 19:28:16.995394] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:23.695 [2024-12-05 19:28:16.996330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59653 ] 00:08:23.695 { 00:08:23.695 "subsystems": [ 00:08:23.695 { 00:08:23.695 "subsystem": "bdev", 00:08:23.695 "config": [ 00:08:23.695 { 00:08:23.695 "params": { 00:08:23.695 "trtype": "pcie", 00:08:23.695 "traddr": "0000:00:10.0", 00:08:23.695 "name": "Nvme0" 00:08:23.695 }, 00:08:23.695 "method": "bdev_nvme_attach_controller" 00:08:23.695 }, 00:08:23.695 { 00:08:23.695 "method": "bdev_wait_for_examine" 00:08:23.695 } 00:08:23.695 ] 00:08:23.695 } 00:08:23.695 ] 00:08:23.695 } 00:08:23.953 [2024-12-05 19:28:17.144299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.953 [2024-12-05 19:28:17.198659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.953 [2024-12-05 19:28:17.251554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.953  [2024-12-05T19:28:17.649Z] Copying: 60/60 [kB] (average 29 MBps) 00:08:24.212 00:08:24.212 19:28:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:24.212 19:28:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:24.212 19:28:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:24.212 19:28:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:24.212 [2024-12-05 19:28:17.625092] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:24.212 [2024-12-05 19:28:17.625206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59661 ] 00:08:24.212 { 00:08:24.212 "subsystems": [ 00:08:24.212 { 00:08:24.212 "subsystem": "bdev", 00:08:24.212 "config": [ 00:08:24.212 { 00:08:24.212 "params": { 00:08:24.212 "trtype": "pcie", 00:08:24.212 "traddr": "0000:00:10.0", 00:08:24.212 "name": "Nvme0" 00:08:24.212 }, 00:08:24.212 "method": "bdev_nvme_attach_controller" 00:08:24.212 }, 00:08:24.212 { 00:08:24.212 "method": "bdev_wait_for_examine" 00:08:24.212 } 00:08:24.212 ] 00:08:24.212 } 00:08:24.212 ] 00:08:24.212 } 00:08:24.470 [2024-12-05 19:28:17.776146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.470 [2024-12-05 19:28:17.826351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.470 [2024-12-05 19:28:17.884141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.728  [2024-12-05T19:28:18.423Z] Copying: 60/60 [kB] (average 19 MBps) 00:08:24.986 00:08:24.986 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:24.986 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:24.986 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:24.986 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:24.986 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:24.986 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:24.986 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:24.986 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:24.986 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:24.986 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:24.986 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:24.986 [2024-12-05 19:28:18.250951] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:24.986 [2024-12-05 19:28:18.251367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59682 ] 00:08:24.986 { 00:08:24.986 "subsystems": [ 00:08:24.986 { 00:08:24.986 "subsystem": "bdev", 00:08:24.986 "config": [ 00:08:24.986 { 00:08:24.986 "params": { 00:08:24.986 "trtype": "pcie", 00:08:24.986 "traddr": "0000:00:10.0", 00:08:24.986 "name": "Nvme0" 00:08:24.986 }, 00:08:24.986 "method": "bdev_nvme_attach_controller" 00:08:24.986 }, 00:08:24.986 { 00:08:24.986 "method": "bdev_wait_for_examine" 00:08:24.986 } 00:08:24.986 ] 00:08:24.986 } 00:08:24.986 ] 00:08:24.986 } 00:08:24.986 [2024-12-05 19:28:18.396033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.245 [2024-12-05 19:28:18.454593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.245 [2024-12-05 19:28:18.508003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.245  [2024-12-05T19:28:18.940Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:25.503 00:08:25.503 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:25.503 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:25.503 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:25.503 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:25.503 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:25.503 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:25.503 19:28:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:26.086 19:28:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:26.086 19:28:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:26.086 19:28:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:26.086 19:28:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:26.086 [2024-12-05 19:28:19.460698] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:26.086 [2024-12-05 19:28:19.460796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59701 ] 00:08:26.086 { 00:08:26.086 "subsystems": [ 00:08:26.086 { 00:08:26.086 "subsystem": "bdev", 00:08:26.086 "config": [ 00:08:26.086 { 00:08:26.086 "params": { 00:08:26.086 "trtype": "pcie", 00:08:26.086 "traddr": "0000:00:10.0", 00:08:26.086 "name": "Nvme0" 00:08:26.086 }, 00:08:26.086 "method": "bdev_nvme_attach_controller" 00:08:26.086 }, 00:08:26.086 { 00:08:26.086 "method": "bdev_wait_for_examine" 00:08:26.086 } 00:08:26.086 ] 00:08:26.086 } 00:08:26.086 ] 00:08:26.086 } 00:08:26.344 [2024-12-05 19:28:19.612350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.344 [2024-12-05 19:28:19.669767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.344 [2024-12-05 19:28:19.725214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.603  [2024-12-05T19:28:20.040Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:26.603 00:08:26.603 19:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:26.603 19:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:26.603 19:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:26.603 19:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:26.863 { 00:08:26.863 "subsystems": [ 00:08:26.863 { 00:08:26.863 "subsystem": "bdev", 00:08:26.863 "config": [ 00:08:26.863 { 00:08:26.863 "params": { 00:08:26.863 "trtype": "pcie", 00:08:26.863 "traddr": "0000:00:10.0", 00:08:26.863 "name": "Nvme0" 00:08:26.863 }, 00:08:26.863 "method": "bdev_nvme_attach_controller" 00:08:26.863 }, 00:08:26.863 { 00:08:26.863 "method": "bdev_wait_for_examine" 00:08:26.863 } 00:08:26.863 ] 00:08:26.863 } 00:08:26.863 ] 00:08:26.863 } 00:08:26.863 [2024-12-05 19:28:20.081159] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:26.863 [2024-12-05 19:28:20.081282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59720 ] 00:08:26.863 [2024-12-05 19:28:20.228921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.863 [2024-12-05 19:28:20.275362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.122 [2024-12-05 19:28:20.330416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.123  [2024-12-05T19:28:20.818Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:27.381 00:08:27.381 19:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:27.381 19:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:27.381 19:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:27.381 19:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:27.381 19:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:27.381 19:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:27.382 19:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:27.382 19:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:27.382 19:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:27.382 19:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:27.382 19:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:27.382 [2024-12-05 19:28:20.712450] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:27.382 [2024-12-05 19:28:20.713354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59730 ] 00:08:27.382 { 00:08:27.382 "subsystems": [ 00:08:27.382 { 00:08:27.382 "subsystem": "bdev", 00:08:27.382 "config": [ 00:08:27.382 { 00:08:27.382 "params": { 00:08:27.382 "trtype": "pcie", 00:08:27.382 "traddr": "0000:00:10.0", 00:08:27.382 "name": "Nvme0" 00:08:27.382 }, 00:08:27.382 "method": "bdev_nvme_attach_controller" 00:08:27.382 }, 00:08:27.382 { 00:08:27.382 "method": "bdev_wait_for_examine" 00:08:27.382 } 00:08:27.382 ] 00:08:27.382 } 00:08:27.382 ] 00:08:27.382 } 00:08:27.641 [2024-12-05 19:28:20.862871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.641 [2024-12-05 19:28:20.920665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.641 [2024-12-05 19:28:20.973987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.899  [2024-12-05T19:28:21.336Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:27.899 00:08:27.899 19:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:27.899 19:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:27.899 19:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:27.899 19:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:27.899 19:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:27.899 19:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:27.899 19:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:27.899 19:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:28.466 19:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:28.466 19:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:28.466 19:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:28.467 19:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:28.467 [2024-12-05 19:28:21.885264] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:28.467 [2024-12-05 19:28:21.886069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59749 ] 00:08:28.467 { 00:08:28.467 "subsystems": [ 00:08:28.467 { 00:08:28.467 "subsystem": "bdev", 00:08:28.467 "config": [ 00:08:28.467 { 00:08:28.467 "params": { 00:08:28.467 "trtype": "pcie", 00:08:28.467 "traddr": "0000:00:10.0", 00:08:28.467 "name": "Nvme0" 00:08:28.467 }, 00:08:28.467 "method": "bdev_nvme_attach_controller" 00:08:28.467 }, 00:08:28.467 { 00:08:28.467 "method": "bdev_wait_for_examine" 00:08:28.467 } 00:08:28.467 ] 00:08:28.467 } 00:08:28.467 ] 00:08:28.467 } 00:08:28.726 [2024-12-05 19:28:22.035821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.726 [2024-12-05 19:28:22.095124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.726 [2024-12-05 19:28:22.152446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.985  [2024-12-05T19:28:22.689Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:29.252 00:08:29.252 19:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:29.252 19:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:29.252 19:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:29.252 19:28:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:29.252 [2024-12-05 19:28:22.530979] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:29.252 [2024-12-05 19:28:22.531095] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59768 ] 00:08:29.252 { 00:08:29.252 "subsystems": [ 00:08:29.252 { 00:08:29.252 "subsystem": "bdev", 00:08:29.252 "config": [ 00:08:29.252 { 00:08:29.252 "params": { 00:08:29.252 "trtype": "pcie", 00:08:29.252 "traddr": "0000:00:10.0", 00:08:29.252 "name": "Nvme0" 00:08:29.252 }, 00:08:29.252 "method": "bdev_nvme_attach_controller" 00:08:29.252 }, 00:08:29.252 { 00:08:29.252 "method": "bdev_wait_for_examine" 00:08:29.252 } 00:08:29.252 ] 00:08:29.252 } 00:08:29.252 ] 00:08:29.252 } 00:08:29.252 [2024-12-05 19:28:22.678392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.511 [2024-12-05 19:28:22.730719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.511 [2024-12-05 19:28:22.784845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.511  [2024-12-05T19:28:23.207Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:29.770 00:08:29.770 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:29.770 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:29.770 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:29.770 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:29.770 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:29.770 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:29.770 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:29.770 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:29.770 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:29.770 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:29.770 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:29.770 { 00:08:29.770 "subsystems": [ 00:08:29.770 { 00:08:29.770 "subsystem": "bdev", 00:08:29.770 "config": [ 00:08:29.770 { 00:08:29.770 "params": { 00:08:29.770 "trtype": "pcie", 00:08:29.770 "traddr": "0000:00:10.0", 00:08:29.770 "name": "Nvme0" 00:08:29.770 }, 00:08:29.770 "method": "bdev_nvme_attach_controller" 00:08:29.770 }, 00:08:29.770 { 00:08:29.770 "method": "bdev_wait_for_examine" 00:08:29.770 } 00:08:29.770 ] 00:08:29.770 } 00:08:29.770 ] 00:08:29.770 } 00:08:29.770 [2024-12-05 19:28:23.170075] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:29.770 [2024-12-05 19:28:23.170944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59789 ] 00:08:30.029 [2024-12-05 19:28:23.318335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.029 [2024-12-05 19:28:23.367198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.029 [2024-12-05 19:28:23.421448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.287  [2024-12-05T19:28:23.724Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:30.287 00:08:30.546 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:30.546 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:30.546 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:30.546 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:30.546 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:30.546 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:30.546 19:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:30.867 19:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:30.867 19:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:30.867 19:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:30.867 19:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:31.142 [2024-12-05 19:28:24.325497] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:31.142 [2024-12-05 19:28:24.325603] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59808 ] 00:08:31.142 { 00:08:31.142 "subsystems": [ 00:08:31.142 { 00:08:31.142 "subsystem": "bdev", 00:08:31.142 "config": [ 00:08:31.142 { 00:08:31.142 "params": { 00:08:31.142 "trtype": "pcie", 00:08:31.142 "traddr": "0000:00:10.0", 00:08:31.142 "name": "Nvme0" 00:08:31.142 }, 00:08:31.142 "method": "bdev_nvme_attach_controller" 00:08:31.142 }, 00:08:31.142 { 00:08:31.142 "method": "bdev_wait_for_examine" 00:08:31.142 } 00:08:31.142 ] 00:08:31.142 } 00:08:31.142 ] 00:08:31.142 } 00:08:31.142 [2024-12-05 19:28:24.469505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.142 [2024-12-05 19:28:24.519347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.142 [2024-12-05 19:28:24.575751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.400  [2024-12-05T19:28:25.097Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:31.660 00:08:31.660 19:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:31.660 19:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:31.660 19:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:31.660 19:28:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:31.660 [2024-12-05 19:28:24.938392] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:31.660 [2024-12-05 19:28:24.938499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59816 ] 00:08:31.660 { 00:08:31.660 "subsystems": [ 00:08:31.660 { 00:08:31.660 "subsystem": "bdev", 00:08:31.660 "config": [ 00:08:31.660 { 00:08:31.660 "params": { 00:08:31.660 "trtype": "pcie", 00:08:31.660 "traddr": "0000:00:10.0", 00:08:31.660 "name": "Nvme0" 00:08:31.660 }, 00:08:31.660 "method": "bdev_nvme_attach_controller" 00:08:31.660 }, 00:08:31.660 { 00:08:31.660 "method": "bdev_wait_for_examine" 00:08:31.660 } 00:08:31.660 ] 00:08:31.660 } 00:08:31.660 ] 00:08:31.660 } 00:08:31.660 [2024-12-05 19:28:25.086116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.918 [2024-12-05 19:28:25.130808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.918 [2024-12-05 19:28:25.181931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.918  [2024-12-05T19:28:25.612Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:32.175 00:08:32.175 19:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:32.175 19:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:32.175 19:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:32.175 19:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:32.175 19:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:32.175 19:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:32.175 19:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:32.175 19:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:32.176 19:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:32.176 19:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:32.176 19:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:32.176 { 00:08:32.176 "subsystems": [ 00:08:32.176 { 00:08:32.176 "subsystem": "bdev", 00:08:32.176 "config": [ 00:08:32.176 { 00:08:32.176 "params": { 00:08:32.176 "trtype": "pcie", 00:08:32.176 "traddr": "0000:00:10.0", 00:08:32.176 "name": "Nvme0" 00:08:32.176 }, 00:08:32.176 "method": "bdev_nvme_attach_controller" 00:08:32.176 }, 00:08:32.176 { 00:08:32.176 "method": "bdev_wait_for_examine" 00:08:32.176 } 00:08:32.176 ] 00:08:32.176 } 00:08:32.176 ] 00:08:32.176 } 00:08:32.176 [2024-12-05 19:28:25.541079] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:32.176 [2024-12-05 19:28:25.541202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59837 ] 00:08:32.434 [2024-12-05 19:28:25.687937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.434 [2024-12-05 19:28:25.744061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.434 [2024-12-05 19:28:25.796762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.694  [2024-12-05T19:28:26.131Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:32.694 00:08:32.694 19:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:32.694 19:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:32.694 19:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:32.694 19:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:32.694 19:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:32.694 19:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:32.694 19:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:32.694 19:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:33.261 19:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:33.261 19:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:33.261 19:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:33.261 19:28:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:33.261 [2024-12-05 19:28:26.695658] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:33.261 [2024-12-05 19:28:26.696601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59856 ] 00:08:33.519 { 00:08:33.519 "subsystems": [ 00:08:33.519 { 00:08:33.519 "subsystem": "bdev", 00:08:33.519 "config": [ 00:08:33.519 { 00:08:33.519 "params": { 00:08:33.519 "trtype": "pcie", 00:08:33.519 "traddr": "0000:00:10.0", 00:08:33.519 "name": "Nvme0" 00:08:33.519 }, 00:08:33.519 "method": "bdev_nvme_attach_controller" 00:08:33.519 }, 00:08:33.519 { 00:08:33.519 "method": "bdev_wait_for_examine" 00:08:33.519 } 00:08:33.519 ] 00:08:33.519 } 00:08:33.519 ] 00:08:33.519 } 00:08:33.519 [2024-12-05 19:28:26.847741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.519 [2024-12-05 19:28:26.901733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.778 [2024-12-05 19:28:26.962409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.778  [2024-12-05T19:28:27.473Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:34.037 00:08:34.037 19:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:34.037 19:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:34.037 19:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:34.037 19:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:34.037 [2024-12-05 19:28:27.317854] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:34.037 [2024-12-05 19:28:27.318393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59875 ] 00:08:34.037 { 00:08:34.037 "subsystems": [ 00:08:34.037 { 00:08:34.037 "subsystem": "bdev", 00:08:34.037 "config": [ 00:08:34.037 { 00:08:34.037 "params": { 00:08:34.037 "trtype": "pcie", 00:08:34.037 "traddr": "0000:00:10.0", 00:08:34.037 "name": "Nvme0" 00:08:34.037 }, 00:08:34.037 "method": "bdev_nvme_attach_controller" 00:08:34.037 }, 00:08:34.037 { 00:08:34.037 "method": "bdev_wait_for_examine" 00:08:34.037 } 00:08:34.037 ] 00:08:34.037 } 00:08:34.037 ] 00:08:34.037 } 00:08:34.037 [2024-12-05 19:28:27.459799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.295 [2024-12-05 19:28:27.511542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.295 [2024-12-05 19:28:27.566439] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.295  [2024-12-05T19:28:27.991Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:34.554 00:08:34.555 19:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:34.555 19:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:34.555 19:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:34.555 19:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:34.555 19:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:34.555 19:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:34.555 19:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:34.555 19:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:34.555 19:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:34.555 19:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:34.555 19:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:34.555 { 00:08:34.555 "subsystems": [ 00:08:34.555 { 00:08:34.555 "subsystem": "bdev", 00:08:34.555 "config": [ 00:08:34.555 { 00:08:34.555 "params": { 00:08:34.555 "trtype": "pcie", 00:08:34.555 "traddr": "0000:00:10.0", 00:08:34.555 "name": "Nvme0" 00:08:34.555 }, 00:08:34.555 "method": "bdev_nvme_attach_controller" 00:08:34.555 }, 00:08:34.555 { 00:08:34.555 "method": "bdev_wait_for_examine" 00:08:34.555 } 00:08:34.555 ] 00:08:34.555 } 00:08:34.555 ] 00:08:34.555 } 00:08:34.555 [2024-12-05 19:28:27.938418] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:34.555 [2024-12-05 19:28:27.938920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59885 ] 00:08:34.813 [2024-12-05 19:28:28.088085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.813 [2024-12-05 19:28:28.147840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.813 [2024-12-05 19:28:28.204980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.072  [2024-12-05T19:28:28.510Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:35.073 00:08:35.073 19:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:35.073 19:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:35.073 19:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:35.073 19:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:35.073 19:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:35.331 19:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:35.331 19:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:35.607 19:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:35.607 19:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:35.607 19:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:35.607 19:28:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:35.607 [2024-12-05 19:28:28.997193] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:35.607 [2024-12-05 19:28:28.997488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59904 ] 00:08:35.607 { 00:08:35.607 "subsystems": [ 00:08:35.607 { 00:08:35.607 "subsystem": "bdev", 00:08:35.607 "config": [ 00:08:35.607 { 00:08:35.607 "params": { 00:08:35.607 "trtype": "pcie", 00:08:35.607 "traddr": "0000:00:10.0", 00:08:35.607 "name": "Nvme0" 00:08:35.607 }, 00:08:35.607 "method": "bdev_nvme_attach_controller" 00:08:35.607 }, 00:08:35.607 { 00:08:35.607 "method": "bdev_wait_for_examine" 00:08:35.607 } 00:08:35.607 ] 00:08:35.607 } 00:08:35.607 ] 00:08:35.607 } 00:08:35.879 [2024-12-05 19:28:29.144483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.879 [2024-12-05 19:28:29.201779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.879 [2024-12-05 19:28:29.256113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.137  [2024-12-05T19:28:29.574Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:36.137 00:08:36.137 19:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:36.137 19:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:36.137 19:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:36.137 19:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:36.395 [2024-12-05 19:28:29.615499] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:36.395 [2024-12-05 19:28:29.615598] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59923 ] 00:08:36.395 { 00:08:36.395 "subsystems": [ 00:08:36.395 { 00:08:36.395 "subsystem": "bdev", 00:08:36.395 "config": [ 00:08:36.396 { 00:08:36.396 "params": { 00:08:36.396 "trtype": "pcie", 00:08:36.396 "traddr": "0000:00:10.0", 00:08:36.396 "name": "Nvme0" 00:08:36.396 }, 00:08:36.396 "method": "bdev_nvme_attach_controller" 00:08:36.396 }, 00:08:36.396 { 00:08:36.396 "method": "bdev_wait_for_examine" 00:08:36.396 } 00:08:36.396 ] 00:08:36.396 } 00:08:36.396 ] 00:08:36.396 } 00:08:36.396 [2024-12-05 19:28:29.763437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.396 [2024-12-05 19:28:29.822224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.654 [2024-12-05 19:28:29.886641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.654  [2024-12-05T19:28:30.349Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:36.912 00:08:36.912 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:36.912 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:36.912 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:36.912 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:36.912 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:36.912 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:36.912 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:36.912 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:36.912 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:36.912 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:36.912 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:36.912 [2024-12-05 19:28:30.259349] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:36.912 [2024-12-05 19:28:30.259467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59943 ] 00:08:36.912 { 00:08:36.912 "subsystems": [ 00:08:36.912 { 00:08:36.912 "subsystem": "bdev", 00:08:36.912 "config": [ 00:08:36.912 { 00:08:36.912 "params": { 00:08:36.912 "trtype": "pcie", 00:08:36.912 "traddr": "0000:00:10.0", 00:08:36.912 "name": "Nvme0" 00:08:36.912 }, 00:08:36.912 "method": "bdev_nvme_attach_controller" 00:08:36.912 }, 00:08:36.912 { 00:08:36.912 "method": "bdev_wait_for_examine" 00:08:36.912 } 00:08:36.912 ] 00:08:36.912 } 00:08:36.912 ] 00:08:36.912 } 00:08:37.170 [2024-12-05 19:28:30.398416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.170 [2024-12-05 19:28:30.448388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.170 [2024-12-05 19:28:30.501997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.496  [2024-12-05T19:28:30.933Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:37.496 00:08:37.496 ************************************ 00:08:37.496 END TEST dd_rw 00:08:37.496 ************************************ 00:08:37.496 00:08:37.496 real 0m14.491s 00:08:37.496 user 0m10.583s 00:08:37.496 sys 0m5.553s 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:37.496 ************************************ 00:08:37.496 START TEST dd_rw_offset 00:08:37.496 ************************************ 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=ghxjwf7yomrnq5gy1ia6cout9qrcmokhlo2h1xtbner15klk2jf8mdi2kfvcwj961jiiealtluqhzxybprqn50w0v8gq72m53dgbppuwt359offjhsamqxeja2e1l3h1znsk2qdgi1v14yb66dp5fdkv9n1e96qgqn1ex4qu4vtvoi8jy72kkoj8wl1wj7aa412k6hf0y23pa4lo27gwwx2nucgorqeu9e78lcpruynm0pr82n0bphj3prn4iwob2cq0qnm26n94hznyyqe7gxw4gruvnyhgnpazmyibpg1cc28p4nx86itryw9w5doq1ji2629c1145lfgwbphwboup6nplpirkicnfpgqf69lhht88cemyos8twlihjuwqnv69dv50c97opt97tcchki14y65mew9folpbshacieldk4svj45gfzns7jjmm7sveyskqh4k772b8mt7d0xodxinx48rba9ctoqgcqut5uthdo7hlzc2c5xfoj4umbfsq8lpzf439qjz2x6pjbd90163cg8c2cuvun4dzuz2n4x3kgf24tciccnir7h2mgjj5axzwdzo0061v2smaqk7m435bql0el23beucxhki8h50bfae6k2t2sxfrwuxb56g1tpuvylenaos69jxdo3r7msrc0u2w4vgplu1lhdtshu2hrmymkbir0y1ig975nutka1ozn7ulqukqw6did23qqp5u58yd1q7lctv4jo3apm0oytofm15rbw9lmdbskzrf8iynve1i257ksqvmeqn5t6mkrtf51j7yvxnjo7rf9r7cxqpy9i7to78pgjf2qlnczovb8p4cgo0ggy1fl0l14ergws22rfjj3cs5e5p339ku3e64p651831exsncqeokvxlqkh2v1hvjaseks53ukyapm190rzrizuuxvde91qlut90uo7psj2qh3lxauxm8qsuwdo21d0vup6fyb4inquyydqtrnty8jww5xl21j70asw488vhcjwwpponbosudzrf5p7t0cpp7kvbh93ekbopoqno8yozy7v4ekfwhlyb3t0sg9kcwl48qt1z1lkbjb8y0p8ihbkqso6lbksg24rbh4xsh4bqqodn2vlqeuc5px4rkfpbvm0umxharcg5rs15ydoct4f3zskzfgl5tclfj8ez1e1zekw9vo3cforl7t41rwd9m25cmijjltxq5yetbo85x2wthumj2ycglof4i2gqxoi9hgni4ty5argc1lyabt5c2cziyxeud2stbwom8q1otn972bazvx8v5mo92bj0m1v8ayuc39ulvyxz1jlebj7t3qbs4dns8suaaop4g8ca540a7ekr6j1emks3z5nvwqea2qa0wvk0ncgeg5mtmdbdw36ti746zzbni22qwhb6b22mj7u1qiiannhw77nzenppcw7sqyjryxlgqt09ipbjuww87flt0s2aupw9xg0hub4s6nzj00cr5bfnqdpubf1vb6ovveyoqz1ukwlblouv3x20vcuffc136fdn00n7w7zgccj1b35yc9yxfl8jzr880b1bwcrpn5m1upd4p9ryuw0r7fwemji2f9q2a0snct6lphpoox4dw4806refhlhcd5hy1d03a0h50unjczs1ev5key0gefmx5l5btvrbpdyxqdcp9g3bg3ts2zp0ds2d5gaoyu869580o19mpc9mqwdvg1n1mol1zmddo0yrax5qsos2s1ngn6xezi32eqpvkcthbnw7fzv717lu1lchl6yidz0zgph729c20jztmcfh6l37a3g8w039p0lotdf5q5lq0qxa5qq3i23hygfdpqr8oa7ehged4p8qy4exdiqiceipxielbscya0b48gvjetgvsob51rj5slx6bg280qucdskxkk1av3kqyotv4h66yk741o3z7x6rvwb0e4aasyx446fhka8nw4qd7qymrzvi31s1q9r16dm6jqy9hx6yokuedelfpmyeasd389ikc1erbt8en5lsxhz85ztvkadjoyc1684kh9aaz8u0ri7k2o9yduq2ad2bexpp85f94mnbqm8aijucofx25o7qfqjnxw7h476zolb5rhu1cdsqy0ccmtx85g5gfiw0dtcyarkbazfw2le9trpskreeg0nea5rlfrcgmsxvvs6bf96rav1ncn45q4tv37oimof0m5utqgoksbypijetgg1gptaj94c8o6tlnefzq067n2s87pt8mqfbrdx3pvlqkgmgftdvua1tz1qu870j0f16jhqq8nhbwen1h3i9xd5jdt6c45detv2xmo3xrshkwxvg5t1nha4udzqkyorwl057k210y85ekn7jeryokjzvxclayi65llagpe8nzvcyc5odidp0rhvo69hhj6ub2yc2plbj1e3ovhdf7m9bpvv8xyf2a0u89gkzz6sf7ltsqtve55solool3lap3a7tciaz5h74keiet29o4jg82jcvg6y5sqo2e1310pub0ej8tjvorxeidy07d54vg3ebuhx1w5chzrvt7uneyc1xk8lv4j289jqra3qxrj426c17ahu9dauanzpv7gehltmi7x6u6wcb22fhzyoc2jrqyjo04jmkk0bowvo7i2owhqg05a2d989i30kcbu03oualfmz2m0rjoni380wdd3na766ox0r3gg4xe0p2sh1u33rnwze5mkq5wvximsl216vu2buf7i1rpyic06uidn5yf5vzb9u60ys6n3y6ks0b1jw7bf1mu81pt10rfbdhaskia0n8qxpnhlivpwefexc7p14mzq7ekhuif6gocs1rbdc927wwcmb9kr262czr5j5hhxuvvyvjjacik456t9fdetqarw8h9m10rj6mrzpuiot3w4hmwohupln6t8rf9cv8je3t881exopvblwm6cxvxrbw8dsohws514iezf7bv35b9wa7mva9qdvv3aog995nw3nonj4pofvuekkzansnjhukf6g5vge2o08pv7m2s8ajug7hm4avfrdo2gs0se0i291ha8qowd6hfln0ymjsyysomctt7fll9bugn5f89br5wwt2b5i8mkjbv2st4igwbevvrajsd3r90u2uy0ws42j2q5snd26dv8nx8c1k96cfd9cl860ey1g4oohfdsfbg11q6hmj3pgmjyy6z4w2w1d9kb1gx09qbk1mwd1x2vvtlfzaf2rhbgqs8o6pqlfbznjj1cgqr1cyfrg42ry92or2idnmmtv1cplq33j5l2kuzhe8eeambxjpzj95p36yiqipje3hqltmk11vjfkie9dnuo17xl98cez6bheg72r7r6nljozyu7wmmnvxt22hpxg8nbfq3zeskntry216jy7kh1i6lkkh2nh0x9pza85lcluwj8eg8y3dlobis1glv3ik4dmdkqpz34lcihqvtymwb8ht4i0ew42omtwoppntr8fsxr1895pnfnh67y223d5rznpa6iu0ajksgzg3ohb3u1g5tzshh5e996rf59t0ij
l5i3nnjpa9jl942y8osokqb8g0jjohjvmdjoyq8zzykz2nbwh748aicxi4t1k6kpt9xlnn3mygcczhgb74ug5eth72qo0i2pv6r6ki3zhwtwumrbubk89k99c4mjf3oxj1c1uohvhfvc7cjsaugxo49uvmni8yyq23he1or8junietigjdh6zids5d84r1wbnwy29a44k68jhax2tez6740jvux1w95lqwc7sx1obh97qtxbja8s2prwggwwmkx5uj82ma5fqki4sk9550nfj8fhjlwqmjo4ysv599t1245aap4q57nxa3h72dkrfjnckcewimld17zav8ys3scm9dwiie9pusze0cc4b1q9jln5nuogeeecxv4d09m5skjvim2vt3t0vzb6cbrgomtrpr5vtjs2zclrlbbzw0dl5laqnk0q3axs0f8p5kj7szzze3676huszpl4qrtgd40k5buxdykx4im39dowcw8xpbl8a1g8h85z5tmkp9ibdks02nvlyilntzq8knqna00ja0564csloblj2a 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:37.496 19:28:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:37.754 [2024-12-05 19:28:30.970002] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:37.754 [2024-12-05 19:28:30.970320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59969 ] 00:08:37.754 { 00:08:37.754 "subsystems": [ 00:08:37.754 { 00:08:37.754 "subsystem": "bdev", 00:08:37.754 "config": [ 00:08:37.754 { 00:08:37.754 "params": { 00:08:37.754 "trtype": "pcie", 00:08:37.754 "traddr": "0000:00:10.0", 00:08:37.754 "name": "Nvme0" 00:08:37.754 }, 00:08:37.754 "method": "bdev_nvme_attach_controller" 00:08:37.754 }, 00:08:37.754 { 00:08:37.754 "method": "bdev_wait_for_examine" 00:08:37.754 } 00:08:37.754 ] 00:08:37.754 } 00:08:37.754 ] 00:08:37.754 } 00:08:37.754 [2024-12-05 19:28:31.117836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.754 [2024-12-05 19:28:31.163838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.012 [2024-12-05 19:28:31.218387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.012  [2024-12-05T19:28:31.706Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:38.269 00:08:38.269 19:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:38.269 19:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:38.269 19:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:38.269 19:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:38.269 [2024-12-05 19:28:31.578895] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:38.269 [2024-12-05 19:28:31.579003] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59988 ] 00:08:38.269 { 00:08:38.269 "subsystems": [ 00:08:38.269 { 00:08:38.269 "subsystem": "bdev", 00:08:38.269 "config": [ 00:08:38.269 { 00:08:38.269 "params": { 00:08:38.269 "trtype": "pcie", 00:08:38.269 "traddr": "0000:00:10.0", 00:08:38.269 "name": "Nvme0" 00:08:38.269 }, 00:08:38.269 "method": "bdev_nvme_attach_controller" 00:08:38.269 }, 00:08:38.269 { 00:08:38.269 "method": "bdev_wait_for_examine" 00:08:38.269 } 00:08:38.269 ] 00:08:38.269 } 00:08:38.269 ] 00:08:38.269 } 00:08:38.527 [2024-12-05 19:28:31.727406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.527 [2024-12-05 19:28:31.782338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.527 [2024-12-05 19:28:31.837255] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.527  [2024-12-05T19:28:32.223Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:38.786 00:08:38.786 19:28:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:38.786 ************************************ 00:08:38.786 END TEST dd_rw_offset 00:08:38.786 ************************************ 00:08:38.787 19:28:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ ghxjwf7yomrnq5gy1ia6cout9qrcmokhlo2h1xtbner15klk2jf8mdi2kfvcwj961jiiealtluqhzxybprqn50w0v8gq72m53dgbppuwt359offjhsamqxeja2e1l3h1znsk2qdgi1v14yb66dp5fdkv9n1e96qgqn1ex4qu4vtvoi8jy72kkoj8wl1wj7aa412k6hf0y23pa4lo27gwwx2nucgorqeu9e78lcpruynm0pr82n0bphj3prn4iwob2cq0qnm26n94hznyyqe7gxw4gruvnyhgnpazmyibpg1cc28p4nx86itryw9w5doq1ji2629c1145lfgwbphwboup6nplpirkicnfpgqf69lhht88cemyos8twlihjuwqnv69dv50c97opt97tcchki14y65mew9folpbshacieldk4svj45gfzns7jjmm7sveyskqh4k772b8mt7d0xodxinx48rba9ctoqgcqut5uthdo7hlzc2c5xfoj4umbfsq8lpzf439qjz2x6pjbd90163cg8c2cuvun4dzuz2n4x3kgf24tciccnir7h2mgjj5axzwdzo0061v2smaqk7m435bql0el23beucxhki8h50bfae6k2t2sxfrwuxb56g1tpuvylenaos69jxdo3r7msrc0u2w4vgplu1lhdtshu2hrmymkbir0y1ig975nutka1ozn7ulqukqw6did23qqp5u58yd1q7lctv4jo3apm0oytofm15rbw9lmdbskzrf8iynve1i257ksqvmeqn5t6mkrtf51j7yvxnjo7rf9r7cxqpy9i7to78pgjf2qlnczovb8p4cgo0ggy1fl0l14ergws22rfjj3cs5e5p339ku3e64p651831exsncqeokvxlqkh2v1hvjaseks53ukyapm190rzrizuuxvde91qlut90uo7psj2qh3lxauxm8qsuwdo21d0vup6fyb4inquyydqtrnty8jww5xl21j70asw488vhcjwwpponbosudzrf5p7t0cpp7kvbh93ekbopoqno8yozy7v4ekfwhlyb3t0sg9kcwl48qt1z1lkbjb8y0p8ihbkqso6lbksg24rbh4xsh4bqqodn2vlqeuc5px4rkfpbvm0umxharcg5rs15ydoct4f3zskzfgl5tclfj8ez1e1zekw9vo3cforl7t41rwd9m25cmijjltxq5yetbo85x2wthumj2ycglof4i2gqxoi9hgni4ty5argc1lyabt5c2cziyxeud2stbwom8q1otn972bazvx8v5mo92bj0m1v8ayuc39ulvyxz1jlebj7t3qbs4dns8suaaop4g8ca540a7ekr6j1emks3z5nvwqea2qa0wvk0ncgeg5mtmdbdw36ti746zzbni22qwhb6b22mj7u1qiiannhw77nzenppcw7sqyjryxlgqt09ipbjuww87flt0s2aupw9xg0hub4s6nzj00cr5bfnqdpubf1vb6ovveyoqz1ukwlblouv3x20vcuffc136fdn00n7w7zgccj1b35yc9yxfl8jzr880b1bwcrpn5m1upd4p9ryuw0r7fwemji2f9q2a0snct6lphpoox4dw4806refhlhcd5hy1d03a0h50unjczs1ev5key0gefmx5l5btvrbpdyxqdcp9g3bg3ts2zp0ds2d5gaoyu869580o19mpc9mqwdvg1n1mol1zmddo0yrax5qsos2s1ngn6xezi32eqpvkcthbnw7fzv717lu1lchl6yidz0zgph729c20jztmcfh6l37a3g8w039p0lotdf5q5lq0qxa5qq3i23hygfdpqr8oa7ehged4p8qy4exdiqiceipxielbscya0b48gvjetgvsob51rj5slx6bg280qucdskxkk1av3kqyotv4h66yk741o3z7x6rvwb0e4aasyx446fhka8nw4qd7qymrzvi31s1q9r16dm6jqy9hx
6yokuedelfpmyeasd389ikc1erbt8en5lsxhz85ztvkadjoyc1684kh9aaz8u0ri7k2o9yduq2ad2bexpp85f94mnbqm8aijucofx25o7qfqjnxw7h476zolb5rhu1cdsqy0ccmtx85g5gfiw0dtcyarkbazfw2le9trpskreeg0nea5rlfrcgmsxvvs6bf96rav1ncn45q4tv37oimof0m5utqgoksbypijetgg1gptaj94c8o6tlnefzq067n2s87pt8mqfbrdx3pvlqkgmgftdvua1tz1qu870j0f16jhqq8nhbwen1h3i9xd5jdt6c45detv2xmo3xrshkwxvg5t1nha4udzqkyorwl057k210y85ekn7jeryokjzvxclayi65llagpe8nzvcyc5odidp0rhvo69hhj6ub2yc2plbj1e3ovhdf7m9bpvv8xyf2a0u89gkzz6sf7ltsqtve55solool3lap3a7tciaz5h74keiet29o4jg82jcvg6y5sqo2e1310pub0ej8tjvorxeidy07d54vg3ebuhx1w5chzrvt7uneyc1xk8lv4j289jqra3qxrj426c17ahu9dauanzpv7gehltmi7x6u6wcb22fhzyoc2jrqyjo04jmkk0bowvo7i2owhqg05a2d989i30kcbu03oualfmz2m0rjoni380wdd3na766ox0r3gg4xe0p2sh1u33rnwze5mkq5wvximsl216vu2buf7i1rpyic06uidn5yf5vzb9u60ys6n3y6ks0b1jw7bf1mu81pt10rfbdhaskia0n8qxpnhlivpwefexc7p14mzq7ekhuif6gocs1rbdc927wwcmb9kr262czr5j5hhxuvvyvjjacik456t9fdetqarw8h9m10rj6mrzpuiot3w4hmwohupln6t8rf9cv8je3t881exopvblwm6cxvxrbw8dsohws514iezf7bv35b9wa7mva9qdvv3aog995nw3nonj4pofvuekkzansnjhukf6g5vge2o08pv7m2s8ajug7hm4avfrdo2gs0se0i291ha8qowd6hfln0ymjsyysomctt7fll9bugn5f89br5wwt2b5i8mkjbv2st4igwbevvrajsd3r90u2uy0ws42j2q5snd26dv8nx8c1k96cfd9cl860ey1g4oohfdsfbg11q6hmj3pgmjyy6z4w2w1d9kb1gx09qbk1mwd1x2vvtlfzaf2rhbgqs8o6pqlfbznjj1cgqr1cyfrg42ry92or2idnmmtv1cplq33j5l2kuzhe8eeambxjpzj95p36yiqipje3hqltmk11vjfkie9dnuo17xl98cez6bheg72r7r6nljozyu7wmmnvxt22hpxg8nbfq3zeskntry216jy7kh1i6lkkh2nh0x9pza85lcluwj8eg8y3dlobis1glv3ik4dmdkqpz34lcihqvtymwb8ht4i0ew42omtwoppntr8fsxr1895pnfnh67y223d5rznpa6iu0ajksgzg3ohb3u1g5tzshh5e996rf59t0ijl5i3nnjpa9jl942y8osokqb8g0jjohjvmdjoyq8zzykz2nbwh748aicxi4t1k6kpt9xlnn3mygcczhgb74ug5eth72qo0i2pv6r6ki3zhwtwumrbubk89k99c4mjf3oxj1c1uohvhfvc7cjsaugxo49uvmni8yyq23he1or8junietigjdh6zids5d84r1wbnwy29a44k68jhax2tez6740jvux1w95lqwc7sx1obh97qtxbja8s2prwggwwmkx5uj82ma5fqki4sk9550nfj8fhjlwqmjo4ysv599t1245aap4q57nxa3h72dkrfjnckcewimld17zav8ys3scm9dwiie9pusze0cc4b1q9jln5nuogeeecxv4d09m5skjvim2vt3t0vzb6cbrgomtrpr5vtjs2zclrlbbzw0dl5laqnk0q3axs0f8p5kj7szzze3676huszpl4qrtgd40k5buxdykx4im39dowcw8xpbl8a1g8h85z5tmkp9ibdks02nvlyilntzq8knqna00ja0564csloblj2a == 
\g\h\x\j\w\f\7\y\o\m\r\n\q\5\g\y\1\i\a\6\c\o\u\t\9\q\r\c\m\o\k\h\l\o\2\h\1\x\t\b\n\e\r\1\5\k\l\k\2\j\f\8\m\d\i\2\k\f\v\c\w\j\9\6\1\j\i\i\e\a\l\t\l\u\q\h\z\x\y\b\p\r\q\n\5\0\w\0\v\8\g\q\7\2\m\5\3\d\g\b\p\p\u\w\t\3\5\9\o\f\f\j\h\s\a\m\q\x\e\j\a\2\e\1\l\3\h\1\z\n\s\k\2\q\d\g\i\1\v\1\4\y\b\6\6\d\p\5\f\d\k\v\9\n\1\e\9\6\q\g\q\n\1\e\x\4\q\u\4\v\t\v\o\i\8\j\y\7\2\k\k\o\j\8\w\l\1\w\j\7\a\a\4\1\2\k\6\h\f\0\y\2\3\p\a\4\l\o\2\7\g\w\w\x\2\n\u\c\g\o\r\q\e\u\9\e\7\8\l\c\p\r\u\y\n\m\0\p\r\8\2\n\0\b\p\h\j\3\p\r\n\4\i\w\o\b\2\c\q\0\q\n\m\2\6\n\9\4\h\z\n\y\y\q\e\7\g\x\w\4\g\r\u\v\n\y\h\g\n\p\a\z\m\y\i\b\p\g\1\c\c\2\8\p\4\n\x\8\6\i\t\r\y\w\9\w\5\d\o\q\1\j\i\2\6\2\9\c\1\1\4\5\l\f\g\w\b\p\h\w\b\o\u\p\6\n\p\l\p\i\r\k\i\c\n\f\p\g\q\f\6\9\l\h\h\t\8\8\c\e\m\y\o\s\8\t\w\l\i\h\j\u\w\q\n\v\6\9\d\v\5\0\c\9\7\o\p\t\9\7\t\c\c\h\k\i\1\4\y\6\5\m\e\w\9\f\o\l\p\b\s\h\a\c\i\e\l\d\k\4\s\v\j\4\5\g\f\z\n\s\7\j\j\m\m\7\s\v\e\y\s\k\q\h\4\k\7\7\2\b\8\m\t\7\d\0\x\o\d\x\i\n\x\4\8\r\b\a\9\c\t\o\q\g\c\q\u\t\5\u\t\h\d\o\7\h\l\z\c\2\c\5\x\f\o\j\4\u\m\b\f\s\q\8\l\p\z\f\4\3\9\q\j\z\2\x\6\p\j\b\d\9\0\1\6\3\c\g\8\c\2\c\u\v\u\n\4\d\z\u\z\2\n\4\x\3\k\g\f\2\4\t\c\i\c\c\n\i\r\7\h\2\m\g\j\j\5\a\x\z\w\d\z\o\0\0\6\1\v\2\s\m\a\q\k\7\m\4\3\5\b\q\l\0\e\l\2\3\b\e\u\c\x\h\k\i\8\h\5\0\b\f\a\e\6\k\2\t\2\s\x\f\r\w\u\x\b\5\6\g\1\t\p\u\v\y\l\e\n\a\o\s\6\9\j\x\d\o\3\r\7\m\s\r\c\0\u\2\w\4\v\g\p\l\u\1\l\h\d\t\s\h\u\2\h\r\m\y\m\k\b\i\r\0\y\1\i\g\9\7\5\n\u\t\k\a\1\o\z\n\7\u\l\q\u\k\q\w\6\d\i\d\2\3\q\q\p\5\u\5\8\y\d\1\q\7\l\c\t\v\4\j\o\3\a\p\m\0\o\y\t\o\f\m\1\5\r\b\w\9\l\m\d\b\s\k\z\r\f\8\i\y\n\v\e\1\i\2\5\7\k\s\q\v\m\e\q\n\5\t\6\m\k\r\t\f\5\1\j\7\y\v\x\n\j\o\7\r\f\9\r\7\c\x\q\p\y\9\i\7\t\o\7\8\p\g\j\f\2\q\l\n\c\z\o\v\b\8\p\4\c\g\o\0\g\g\y\1\f\l\0\l\1\4\e\r\g\w\s\2\2\r\f\j\j\3\c\s\5\e\5\p\3\3\9\k\u\3\e\6\4\p\6\5\1\8\3\1\e\x\s\n\c\q\e\o\k\v\x\l\q\k\h\2\v\1\h\v\j\a\s\e\k\s\5\3\u\k\y\a\p\m\1\9\0\r\z\r\i\z\u\u\x\v\d\e\9\1\q\l\u\t\9\0\u\o\7\p\s\j\2\q\h\3\l\x\a\u\x\m\8\q\s\u\w\d\o\2\1\d\0\v\u\p\6\f\y\b\4\i\n\q\u\y\y\d\q\t\r\n\t\y\8\j\w\w\5\x\l\2\1\j\7\0\a\s\w\4\8\8\v\h\c\j\w\w\p\p\o\n\b\o\s\u\d\z\r\f\5\p\7\t\0\c\p\p\7\k\v\b\h\9\3\e\k\b\o\p\o\q\n\o\8\y\o\z\y\7\v\4\e\k\f\w\h\l\y\b\3\t\0\s\g\9\k\c\w\l\4\8\q\t\1\z\1\l\k\b\j\b\8\y\0\p\8\i\h\b\k\q\s\o\6\l\b\k\s\g\2\4\r\b\h\4\x\s\h\4\b\q\q\o\d\n\2\v\l\q\e\u\c\5\p\x\4\r\k\f\p\b\v\m\0\u\m\x\h\a\r\c\g\5\r\s\1\5\y\d\o\c\t\4\f\3\z\s\k\z\f\g\l\5\t\c\l\f\j\8\e\z\1\e\1\z\e\k\w\9\v\o\3\c\f\o\r\l\7\t\4\1\r\w\d\9\m\2\5\c\m\i\j\j\l\t\x\q\5\y\e\t\b\o\8\5\x\2\w\t\h\u\m\j\2\y\c\g\l\o\f\4\i\2\g\q\x\o\i\9\h\g\n\i\4\t\y\5\a\r\g\c\1\l\y\a\b\t\5\c\2\c\z\i\y\x\e\u\d\2\s\t\b\w\o\m\8\q\1\o\t\n\9\7\2\b\a\z\v\x\8\v\5\m\o\9\2\b\j\0\m\1\v\8\a\y\u\c\3\9\u\l\v\y\x\z\1\j\l\e\b\j\7\t\3\q\b\s\4\d\n\s\8\s\u\a\a\o\p\4\g\8\c\a\5\4\0\a\7\e\k\r\6\j\1\e\m\k\s\3\z\5\n\v\w\q\e\a\2\q\a\0\w\v\k\0\n\c\g\e\g\5\m\t\m\d\b\d\w\3\6\t\i\7\4\6\z\z\b\n\i\2\2\q\w\h\b\6\b\2\2\m\j\7\u\1\q\i\i\a\n\n\h\w\7\7\n\z\e\n\p\p\c\w\7\s\q\y\j\r\y\x\l\g\q\t\0\9\i\p\b\j\u\w\w\8\7\f\l\t\0\s\2\a\u\p\w\9\x\g\0\h\u\b\4\s\6\n\z\j\0\0\c\r\5\b\f\n\q\d\p\u\b\f\1\v\b\6\o\v\v\e\y\o\q\z\1\u\k\w\l\b\l\o\u\v\3\x\2\0\v\c\u\f\f\c\1\3\6\f\d\n\0\0\n\7\w\7\z\g\c\c\j\1\b\3\5\y\c\9\y\x\f\l\8\j\z\r\8\8\0\b\1\b\w\c\r\p\n\5\m\1\u\p\d\4\p\9\r\y\u\w\0\r\7\f\w\e\m\j\i\2\f\9\q\2\a\0\s\n\c\t\6\l\p\h\p\o\o\x\4\d\w\4\8\0\6\r\e\f\h\l\h\c\d\5\h\y\1\d\0\3\a\0\h\5\0\u\n\j\c\z\s\1\e\v\5\k\e\y\0\g\e\f\m\x\5\l\5\b\t\v\r\b\p\d\y\x\q\d\c\p\9\g\3\b\g\3\t\s\2\z\p\0\d\s\2\d\5\g\a\o\y\u\8\6\9\5\8\0\o\1\9\m\p\c\9\m\q\w\d\v\g\1\n\1\m\o\l\1\z\m\d\d\o\0\y\r\a\x\5\q\s\o\s\2\s\1\n\g\n\6\x\e\z\i\3\2\e\q\p\v\k\c\t\h\b\n\w\7\f\z\v\7\1\7\l\u\1\
l\c\h\l\6\y\i\d\z\0\z\g\p\h\7\2\9\c\2\0\j\z\t\m\c\f\h\6\l\3\7\a\3\g\8\w\0\3\9\p\0\l\o\t\d\f\5\q\5\l\q\0\q\x\a\5\q\q\3\i\2\3\h\y\g\f\d\p\q\r\8\o\a\7\e\h\g\e\d\4\p\8\q\y\4\e\x\d\i\q\i\c\e\i\p\x\i\e\l\b\s\c\y\a\0\b\4\8\g\v\j\e\t\g\v\s\o\b\5\1\r\j\5\s\l\x\6\b\g\2\8\0\q\u\c\d\s\k\x\k\k\1\a\v\3\k\q\y\o\t\v\4\h\6\6\y\k\7\4\1\o\3\z\7\x\6\r\v\w\b\0\e\4\a\a\s\y\x\4\4\6\f\h\k\a\8\n\w\4\q\d\7\q\y\m\r\z\v\i\3\1\s\1\q\9\r\1\6\d\m\6\j\q\y\9\h\x\6\y\o\k\u\e\d\e\l\f\p\m\y\e\a\s\d\3\8\9\i\k\c\1\e\r\b\t\8\e\n\5\l\s\x\h\z\8\5\z\t\v\k\a\d\j\o\y\c\1\6\8\4\k\h\9\a\a\z\8\u\0\r\i\7\k\2\o\9\y\d\u\q\2\a\d\2\b\e\x\p\p\8\5\f\9\4\m\n\b\q\m\8\a\i\j\u\c\o\f\x\2\5\o\7\q\f\q\j\n\x\w\7\h\4\7\6\z\o\l\b\5\r\h\u\1\c\d\s\q\y\0\c\c\m\t\x\8\5\g\5\g\f\i\w\0\d\t\c\y\a\r\k\b\a\z\f\w\2\l\e\9\t\r\p\s\k\r\e\e\g\0\n\e\a\5\r\l\f\r\c\g\m\s\x\v\v\s\6\b\f\9\6\r\a\v\1\n\c\n\4\5\q\4\t\v\3\7\o\i\m\o\f\0\m\5\u\t\q\g\o\k\s\b\y\p\i\j\e\t\g\g\1\g\p\t\a\j\9\4\c\8\o\6\t\l\n\e\f\z\q\0\6\7\n\2\s\8\7\p\t\8\m\q\f\b\r\d\x\3\p\v\l\q\k\g\m\g\f\t\d\v\u\a\1\t\z\1\q\u\8\7\0\j\0\f\1\6\j\h\q\q\8\n\h\b\w\e\n\1\h\3\i\9\x\d\5\j\d\t\6\c\4\5\d\e\t\v\2\x\m\o\3\x\r\s\h\k\w\x\v\g\5\t\1\n\h\a\4\u\d\z\q\k\y\o\r\w\l\0\5\7\k\2\1\0\y\8\5\e\k\n\7\j\e\r\y\o\k\j\z\v\x\c\l\a\y\i\6\5\l\l\a\g\p\e\8\n\z\v\c\y\c\5\o\d\i\d\p\0\r\h\v\o\6\9\h\h\j\6\u\b\2\y\c\2\p\l\b\j\1\e\3\o\v\h\d\f\7\m\9\b\p\v\v\8\x\y\f\2\a\0\u\8\9\g\k\z\z\6\s\f\7\l\t\s\q\t\v\e\5\5\s\o\l\o\o\l\3\l\a\p\3\a\7\t\c\i\a\z\5\h\7\4\k\e\i\e\t\2\9\o\4\j\g\8\2\j\c\v\g\6\y\5\s\q\o\2\e\1\3\1\0\p\u\b\0\e\j\8\t\j\v\o\r\x\e\i\d\y\0\7\d\5\4\v\g\3\e\b\u\h\x\1\w\5\c\h\z\r\v\t\7\u\n\e\y\c\1\x\k\8\l\v\4\j\2\8\9\j\q\r\a\3\q\x\r\j\4\2\6\c\1\7\a\h\u\9\d\a\u\a\n\z\p\v\7\g\e\h\l\t\m\i\7\x\6\u\6\w\c\b\2\2\f\h\z\y\o\c\2\j\r\q\y\j\o\0\4\j\m\k\k\0\b\o\w\v\o\7\i\2\o\w\h\q\g\0\5\a\2\d\9\8\9\i\3\0\k\c\b\u\0\3\o\u\a\l\f\m\z\2\m\0\r\j\o\n\i\3\8\0\w\d\d\3\n\a\7\6\6\o\x\0\r\3\g\g\4\x\e\0\p\2\s\h\1\u\3\3\r\n\w\z\e\5\m\k\q\5\w\v\x\i\m\s\l\2\1\6\v\u\2\b\u\f\7\i\1\r\p\y\i\c\0\6\u\i\d\n\5\y\f\5\v\z\b\9\u\6\0\y\s\6\n\3\y\6\k\s\0\b\1\j\w\7\b\f\1\m\u\8\1\p\t\1\0\r\f\b\d\h\a\s\k\i\a\0\n\8\q\x\p\n\h\l\i\v\p\w\e\f\e\x\c\7\p\1\4\m\z\q\7\e\k\h\u\i\f\6\g\o\c\s\1\r\b\d\c\9\2\7\w\w\c\m\b\9\k\r\2\6\2\c\z\r\5\j\5\h\h\x\u\v\v\y\v\j\j\a\c\i\k\4\5\6\t\9\f\d\e\t\q\a\r\w\8\h\9\m\1\0\r\j\6\m\r\z\p\u\i\o\t\3\w\4\h\m\w\o\h\u\p\l\n\6\t\8\r\f\9\c\v\8\j\e\3\t\8\8\1\e\x\o\p\v\b\l\w\m\6\c\x\v\x\r\b\w\8\d\s\o\h\w\s\5\1\4\i\e\z\f\7\b\v\3\5\b\9\w\a\7\m\v\a\9\q\d\v\v\3\a\o\g\9\9\5\n\w\3\n\o\n\j\4\p\o\f\v\u\e\k\k\z\a\n\s\n\j\h\u\k\f\6\g\5\v\g\e\2\o\0\8\p\v\7\m\2\s\8\a\j\u\g\7\h\m\4\a\v\f\r\d\o\2\g\s\0\s\e\0\i\2\9\1\h\a\8\q\o\w\d\6\h\f\l\n\0\y\m\j\s\y\y\s\o\m\c\t\t\7\f\l\l\9\b\u\g\n\5\f\8\9\b\r\5\w\w\t\2\b\5\i\8\m\k\j\b\v\2\s\t\4\i\g\w\b\e\v\v\r\a\j\s\d\3\r\9\0\u\2\u\y\0\w\s\4\2\j\2\q\5\s\n\d\2\6\d\v\8\n\x\8\c\1\k\9\6\c\f\d\9\c\l\8\6\0\e\y\1\g\4\o\o\h\f\d\s\f\b\g\1\1\q\6\h\m\j\3\p\g\m\j\y\y\6\z\4\w\2\w\1\d\9\k\b\1\g\x\0\9\q\b\k\1\m\w\d\1\x\2\v\v\t\l\f\z\a\f\2\r\h\b\g\q\s\8\o\6\p\q\l\f\b\z\n\j\j\1\c\g\q\r\1\c\y\f\r\g\4\2\r\y\9\2\o\r\2\i\d\n\m\m\t\v\1\c\p\l\q\3\3\j\5\l\2\k\u\z\h\e\8\e\e\a\m\b\x\j\p\z\j\9\5\p\3\6\y\i\q\i\p\j\e\3\h\q\l\t\m\k\1\1\v\j\f\k\i\e\9\d\n\u\o\1\7\x\l\9\8\c\e\z\6\b\h\e\g\7\2\r\7\r\6\n\l\j\o\z\y\u\7\w\m\m\n\v\x\t\2\2\h\p\x\g\8\n\b\f\q\3\z\e\s\k\n\t\r\y\2\1\6\j\y\7\k\h\1\i\6\l\k\k\h\2\n\h\0\x\9\p\z\a\8\5\l\c\l\u\w\j\8\e\g\8\y\3\d\l\o\b\i\s\1\g\l\v\3\i\k\4\d\m\d\k\q\p\z\3\4\l\c\i\h\q\v\t\y\m\w\b\8\h\t\4\i\0\e\w\4\2\o\m\t\w\o\p\p\n\t\r\8\f\s\x\r\1\8\9\5\p\n\f\n\h\6\7\y\2\2\3\d\5\r\z\n\p\a\6\i\u\0\a\j\k\s\g\z\g\3\o\h\b\3\u\1\g\5\t\z\s\h\h\5\e\9\9\6\r\f\5\9\t\0\i\j\l\5\i\3\n
\n\j\p\a\9\j\l\9\4\2\y\8\o\s\o\k\q\b\8\g\0\j\j\o\h\j\v\m\d\j\o\y\q\8\z\z\y\k\z\2\n\b\w\h\7\4\8\a\i\c\x\i\4\t\1\k\6\k\p\t\9\x\l\n\n\3\m\y\g\c\c\z\h\g\b\7\4\u\g\5\e\t\h\7\2\q\o\0\i\2\p\v\6\r\6\k\i\3\z\h\w\t\w\u\m\r\b\u\b\k\8\9\k\9\9\c\4\m\j\f\3\o\x\j\1\c\1\u\o\h\v\h\f\v\c\7\c\j\s\a\u\g\x\o\4\9\u\v\m\n\i\8\y\y\q\2\3\h\e\1\o\r\8\j\u\n\i\e\t\i\g\j\d\h\6\z\i\d\s\5\d\8\4\r\1\w\b\n\w\y\2\9\a\4\4\k\6\8\j\h\a\x\2\t\e\z\6\7\4\0\j\v\u\x\1\w\9\5\l\q\w\c\7\s\x\1\o\b\h\9\7\q\t\x\b\j\a\8\s\2\p\r\w\g\g\w\w\m\k\x\5\u\j\8\2\m\a\5\f\q\k\i\4\s\k\9\5\5\0\n\f\j\8\f\h\j\l\w\q\m\j\o\4\y\s\v\5\9\9\t\1\2\4\5\a\a\p\4\q\5\7\n\x\a\3\h\7\2\d\k\r\f\j\n\c\k\c\e\w\i\m\l\d\1\7\z\a\v\8\y\s\3\s\c\m\9\d\w\i\i\e\9\p\u\s\z\e\0\c\c\4\b\1\q\9\j\l\n\5\n\u\o\g\e\e\e\c\x\v\4\d\0\9\m\5\s\k\j\v\i\m\2\v\t\3\t\0\v\z\b\6\c\b\r\g\o\m\t\r\p\r\5\v\t\j\s\2\z\c\l\r\l\b\b\z\w\0\d\l\5\l\a\q\n\k\0\q\3\a\x\s\0\f\8\p\5\k\j\7\s\z\z\z\e\3\6\7\6\h\u\s\z\p\l\4\q\r\t\g\d\4\0\k\5\b\u\x\d\y\k\x\4\i\m\3\9\d\o\w\c\w\8\x\p\b\l\8\a\1\g\8\h\8\5\z\5\t\m\k\p\9\i\b\d\k\s\0\2\n\v\l\y\i\l\n\t\z\q\8\k\n\q\n\a\0\0\j\a\0\5\6\4\c\s\l\o\b\l\j\2\a ]] 00:08:38.787 00:08:38.787 real 0m1.271s 00:08:38.787 user 0m0.859s 00:08:38.787 sys 0m0.609s 00:08:38.787 19:28:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.787 19:28:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:38.787 19:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:38.787 19:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:38.787 19:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:38.787 19:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:38.787 19:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:38.787 19:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:38.787 19:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:38.787 19:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:38.787 19:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:38.787 19:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:38.787 19:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:39.045 [2024-12-05 19:28:32.239997] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
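The cleanup step launching here zero-fills one 1 MiB block of the Nvme0n1 bdev, handing spdk_dd its bdev configuration over an extra file descriptor (/dev/fd/62 in the trace). A minimal sketch of that invocation, reconstructed from the surrounding records with paths abbreviated to the repo root — the JSON content and PCIe address are taken from the log, while the process substitution is only an assumed way of producing the fd:

  conf='{"subsystems":[{"subsystem":"bdev","config":[
          {"method":"bdev_nvme_attach_controller",
           "params":{"name":"Nvme0","trtype":"pcie","traddr":"0000:00:10.0"}},
          {"method":"bdev_wait_for_examine"}]}]}'
  # Write one 1048576-byte block of zeroes to the bdev named Nvme0n1.
  build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 \
      --json <(printf '%s' "$conf")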
00:08:39.045 [2024-12-05 19:28:32.240316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60014 ] 00:08:39.045 { 00:08:39.045 "subsystems": [ 00:08:39.045 { 00:08:39.045 "subsystem": "bdev", 00:08:39.045 "config": [ 00:08:39.045 { 00:08:39.045 "params": { 00:08:39.045 "trtype": "pcie", 00:08:39.045 "traddr": "0000:00:10.0", 00:08:39.045 "name": "Nvme0" 00:08:39.045 }, 00:08:39.045 "method": "bdev_nvme_attach_controller" 00:08:39.045 }, 00:08:39.045 { 00:08:39.045 "method": "bdev_wait_for_examine" 00:08:39.045 } 00:08:39.045 ] 00:08:39.045 } 00:08:39.045 ] 00:08:39.045 } 00:08:39.045 [2024-12-05 19:28:32.388679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.045 [2024-12-05 19:28:32.433854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.304 [2024-12-05 19:28:32.487283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.304  [2024-12-05T19:28:32.999Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:39.562 00:08:39.562 19:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:39.562 ************************************ 00:08:39.562 END TEST spdk_dd_basic_rw 00:08:39.562 ************************************ 00:08:39.562 00:08:39.562 real 0m17.620s 00:08:39.562 user 0m12.573s 00:08:39.562 sys 0m6.826s 00:08:39.562 19:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.562 19:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:39.562 19:28:32 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:39.562 19:28:32 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.562 19:28:32 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.563 19:28:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:39.563 ************************************ 00:08:39.563 START TEST spdk_dd_posix 00:08:39.563 ************************************ 00:08:39.563 19:28:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:39.563 * Looking for test storage... 
00:08:39.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:39.563 19:28:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:39.563 19:28:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:08:39.563 19:28:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:39.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.822 --rc genhtml_branch_coverage=1 00:08:39.822 --rc genhtml_function_coverage=1 00:08:39.822 --rc genhtml_legend=1 00:08:39.822 --rc geninfo_all_blocks=1 00:08:39.822 --rc geninfo_unexecuted_blocks=1 00:08:39.822 00:08:39.822 ' 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:39.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.822 --rc genhtml_branch_coverage=1 00:08:39.822 --rc genhtml_function_coverage=1 00:08:39.822 --rc genhtml_legend=1 00:08:39.822 --rc geninfo_all_blocks=1 00:08:39.822 --rc geninfo_unexecuted_blocks=1 00:08:39.822 00:08:39.822 ' 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:39.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.822 --rc genhtml_branch_coverage=1 00:08:39.822 --rc genhtml_function_coverage=1 00:08:39.822 --rc genhtml_legend=1 00:08:39.822 --rc geninfo_all_blocks=1 00:08:39.822 --rc geninfo_unexecuted_blocks=1 00:08:39.822 00:08:39.822 ' 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:39.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.822 --rc genhtml_branch_coverage=1 00:08:39.822 --rc genhtml_function_coverage=1 00:08:39.822 --rc genhtml_legend=1 00:08:39.822 --rc geninfo_all_blocks=1 00:08:39.822 --rc geninfo_unexecuted_blocks=1 00:08:39.822 00:08:39.822 ' 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.822 19:28:33 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:39.823 * First test run, liburing in use 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:39.823 ************************************ 00:08:39.823 START TEST dd_flag_append 00:08:39.823 ************************************ 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=5c11cuae120pnbicyk1wsxy132ysz06i 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=c8gm9736r9t85q7aln8ejk626fmibemm 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 5c11cuae120pnbicyk1wsxy132ysz06i 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s c8gm9736r9t85q7aln8ejk626fmibemm 00:08:39.823 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:39.823 [2024-12-05 19:28:33.149749] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
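The append run starting here writes one 32-character token into each dump file and then copies dump0 onto dump1 with --oflag=append; the comparison in the next record expects dump1 to end up holding its own token immediately followed by dump0's. A rough shell equivalent of the check, with paths abbreviated to the repo root and the token values copied from the trace:

  dump0=5c11cuae120pnbicyk1wsxy132ysz06i    # gen_bytes 32 output, per the trace
  dump1=c8gm9736r9t85q7aln8ejk626fmibemm
  printf %s "$dump0" > test/dd/dd.dump0
  printf %s "$dump1" > test/dd/dd.dump1
  build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --oflag=append
  # O_APPEND places the copied bytes after the existing contents, so the file
  # must now read as dump1 followed by dump0.
  [[ "$(<test/dd/dd.dump1)" == "${dump1}${dump0}" ]]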
00:08:39.823 [2024-12-05 19:28:33.149909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60086 ] 00:08:40.084 [2024-12-05 19:28:33.306891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.084 [2024-12-05 19:28:33.364681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.084 [2024-12-05 19:28:33.420488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.084  [2024-12-05T19:28:33.803Z] Copying: 32/32 [B] (average 31 kBps) 00:08:40.366 00:08:40.366 ************************************ 00:08:40.366 END TEST dd_flag_append 00:08:40.366 ************************************ 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ c8gm9736r9t85q7aln8ejk626fmibemm5c11cuae120pnbicyk1wsxy132ysz06i == \c\8\g\m\9\7\3\6\r\9\t\8\5\q\7\a\l\n\8\e\j\k\6\2\6\f\m\i\b\e\m\m\5\c\1\1\c\u\a\e\1\2\0\p\n\b\i\c\y\k\1\w\s\x\y\1\3\2\y\s\z\0\6\i ]] 00:08:40.366 00:08:40.366 real 0m0.583s 00:08:40.366 user 0m0.317s 00:08:40.366 sys 0m0.288s 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:40.366 ************************************ 00:08:40.366 START TEST dd_flag_directory 00:08:40.366 ************************************ 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:40.366 19:28:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:40.366 [2024-12-05 19:28:33.748146] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:40.366 [2024-12-05 19:28:33.748249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60120 ] 00:08:40.625 [2024-12-05 19:28:33.892466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.625 [2024-12-05 19:28:33.945642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.625 [2024-12-05 19:28:34.001377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.625 [2024-12-05 19:28:34.040106] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:40.625 [2024-12-05 19:28:34.040194] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:40.625 [2024-12-05 19:28:34.040229] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.885 [2024-12-05 19:28:34.157551] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.885 19:28:34 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:40.885 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:40.885 [2024-12-05 19:28:34.284081] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:40.885 [2024-12-05 19:28:34.284420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60124 ] 00:08:41.144 [2024-12-05 19:28:34.433219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.144 [2024-12-05 19:28:34.480365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.144 [2024-12-05 19:28:34.533282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.144 [2024-12-05 19:28:34.569181] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:41.144 [2024-12-05 19:28:34.569237] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:41.144 [2024-12-05 19:28:34.569271] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.403 [2024-12-05 19:28:34.682701] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:41.403 ************************************ 00:08:41.403 END TEST dd_flag_directory 00:08:41.403 ************************************ 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.403 00:08:41.403 real 0m1.042s 00:08:41.403 user 0m0.551s 00:08:41.403 sys 0m0.280s 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:41.403 19:28:34 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:41.403 ************************************ 00:08:41.403 START TEST dd_flag_nofollow 00:08:41.403 ************************************ 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:41.403 19:28:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:41.662 [2024-12-05 19:28:34.855043] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
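The nofollow test beginning in this record points two symlinks at the dump files and first exercises the failure path: with --iflag=nofollow the open on dd.dump0.link is expected to fail (the "Too many levels of symbolic links" errors in the next record), so the command must exit non-zero. A condensed sketch, with paths abbreviated — the test's NOT helper simply inverts the exit status, modelled here with '!':

  ln -fs test/dd/dd.dump0 test/dd/dd.dump0.link
  ln -fs test/dd/dd.dump1 test/dd/dd.dump1.link
  # O_NOFOLLOW on a symlinked input must make the open fail.
  ! build/bin/spdk_dd --if=test/dd/dd.dump0.link --iflag=nofollow \
        --of=test/dd/dd.dump1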
00:08:41.662 [2024-12-05 19:28:34.855150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60158 ] 00:08:41.662 [2024-12-05 19:28:35.000611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.662 [2024-12-05 19:28:35.043284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.662 [2024-12-05 19:28:35.096002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.920 [2024-12-05 19:28:35.129946] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:41.920 [2024-12-05 19:28:35.130002] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:41.920 [2024-12-05 19:28:35.130037] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.920 [2024-12-05 19:28:35.247611] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.920 19:28:35 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:41.920 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:42.178 [2024-12-05 19:28:35.364371] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:42.178 [2024-12-05 19:28:35.364462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60166 ] 00:08:42.178 [2024-12-05 19:28:35.506839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.178 [2024-12-05 19:28:35.548103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.178 [2024-12-05 19:28:35.599345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.437 [2024-12-05 19:28:35.634429] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:42.437 [2024-12-05 19:28:35.634774] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:42.437 [2024-12-05 19:28:35.634802] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:42.437 [2024-12-05 19:28:35.751592] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:42.437 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:42.437 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:42.437 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:42.437 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:42.437 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:42.437 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:42.437 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:42.437 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:42.437 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:42.437 19:28:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:42.696 [2024-12-05 19:28:35.892412] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:42.696 [2024-12-05 19:28:35.892718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60176 ] 00:08:42.696 [2024-12-05 19:28:36.042728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.696 [2024-12-05 19:28:36.100664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.955 [2024-12-05 19:28:36.162095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.955  [2024-12-05T19:28:36.651Z] Copying: 512/512 [B] (average 500 kBps) 00:08:43.214 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ zmg7o1lmfounl5i3pabc3ulwj6izuf0g9lf8e5tw5kl6gb6rmwawyxk5fzfjg90vzxi0c1sy7zewm253xbl2g9dhjl1b0gg7ikks26owh0z2xyx6uonzgntegnic41u575ar0l82nade2uyszkb646jgz41hxzp5x2d50cbdoknyx0hve1dzl32hccsu5v5cacvq3zvrdteion3pxm4op0qxbbk5hihmdh4qpnqmu6ld8ws56mkl8gu6ztx6bdhw6j0vxtppoc61fajmvbmjomc8r2ffyasbo1lnoqlpbjaeugk45d8q0ysr3veqlird1wdamiv8ty1ghw0rj8fji3o895hc0phm8m3e4q4362ervrb8649nh4mnswrep5dhh5dcimcx5anwavc2v5kqhrqmq4ib59eil3l36sjjt6kntdz25ptsw5p5ojxhgdlu5c75efr0dfq7xdhoh2ccr6qldqmn5fybqvghq5bh6s853qmtws8gv3iz6gxnb2bh == \z\m\g\7\o\1\l\m\f\o\u\n\l\5\i\3\p\a\b\c\3\u\l\w\j\6\i\z\u\f\0\g\9\l\f\8\e\5\t\w\5\k\l\6\g\b\6\r\m\w\a\w\y\x\k\5\f\z\f\j\g\9\0\v\z\x\i\0\c\1\s\y\7\z\e\w\m\2\5\3\x\b\l\2\g\9\d\h\j\l\1\b\0\g\g\7\i\k\k\s\2\6\o\w\h\0\z\2\x\y\x\6\u\o\n\z\g\n\t\e\g\n\i\c\4\1\u\5\7\5\a\r\0\l\8\2\n\a\d\e\2\u\y\s\z\k\b\6\4\6\j\g\z\4\1\h\x\z\p\5\x\2\d\5\0\c\b\d\o\k\n\y\x\0\h\v\e\1\d\z\l\3\2\h\c\c\s\u\5\v\5\c\a\c\v\q\3\z\v\r\d\t\e\i\o\n\3\p\x\m\4\o\p\0\q\x\b\b\k\5\h\i\h\m\d\h\4\q\p\n\q\m\u\6\l\d\8\w\s\5\6\m\k\l\8\g\u\6\z\t\x\6\b\d\h\w\6\j\0\v\x\t\p\p\o\c\6\1\f\a\j\m\v\b\m\j\o\m\c\8\r\2\f\f\y\a\s\b\o\1\l\n\o\q\l\p\b\j\a\e\u\g\k\4\5\d\8\q\0\y\s\r\3\v\e\q\l\i\r\d\1\w\d\a\m\i\v\8\t\y\1\g\h\w\0\r\j\8\f\j\i\3\o\8\9\5\h\c\0\p\h\m\8\m\3\e\4\q\4\3\6\2\e\r\v\r\b\8\6\4\9\n\h\4\m\n\s\w\r\e\p\5\d\h\h\5\d\c\i\m\c\x\5\a\n\w\a\v\c\2\v\5\k\q\h\r\q\m\q\4\i\b\5\9\e\i\l\3\l\3\6\s\j\j\t\6\k\n\t\d\z\2\5\p\t\s\w\5\p\5\o\j\x\h\g\d\l\u\5\c\7\5\e\f\r\0\d\f\q\7\x\d\h\o\h\2\c\c\r\6\q\l\d\q\m\n\5\f\y\b\q\v\g\h\q\5\b\h\6\s\8\5\3\q\m\t\w\s\8\g\v\3\i\z\6\g\x\n\b\2\b\h ]] 00:08:43.214 00:08:43.214 real 0m1.620s 00:08:43.214 user 0m0.853s 00:08:43.214 sys 0m0.596s 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.214 ************************************ 00:08:43.214 END TEST dd_flag_nofollow 00:08:43.214 ************************************ 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:43.214 ************************************ 00:08:43.214 START TEST dd_flag_noatime 00:08:43.214 ************************************ 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733426916 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733426916 00:08:43.214 19:28:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:44.151 19:28:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.151 [2024-12-05 19:28:37.544878] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:44.151 [2024-12-05 19:28:37.544993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60219 ] 00:08:44.410 [2024-12-05 19:28:37.687057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.410 [2024-12-05 19:28:37.748138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.410 [2024-12-05 19:28:37.815347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.684  [2024-12-05T19:28:38.121Z] Copying: 512/512 [B] (average 500 kBps) 00:08:44.684 00:08:44.684 19:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:44.684 19:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733426916 )) 00:08:44.684 19:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.684 19:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733426916 )) 00:08:44.684 19:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.684 [2024-12-05 19:28:38.101371] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
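The noatime run launching here follows the pattern visible in the preceding records: capture each file's access time with stat --printf=%X, sleep one second, then copy with --iflag=noatime and expect the recorded atime to be unchanged afterwards. Roughly, with paths abbreviated to the repo root:

  atime_if=$(stat --printf=%X test/dd/dd.dump0)
  atime_of=$(stat --printf=%X test/dd/dd.dump1)
  sleep 1
  build/bin/spdk_dd --if=test/dd/dd.dump0 --iflag=noatime --of=test/dd/dd.dump1
  # O_NOATIME must leave the source's access time exactly where it was.
  (( $(stat --printf=%X test/dd/dd.dump0) == atime_if ))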
00:08:44.684 [2024-12-05 19:28:38.101476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60232 ] 00:08:44.973 [2024-12-05 19:28:38.248458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.973 [2024-12-05 19:28:38.300698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.973 [2024-12-05 19:28:38.353062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.973  [2024-12-05T19:28:38.669Z] Copying: 512/512 [B] (average 500 kBps) 00:08:45.232 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:45.232 ************************************ 00:08:45.232 END TEST dd_flag_noatime 00:08:45.232 ************************************ 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733426918 )) 00:08:45.232 00:08:45.232 real 0m2.109s 00:08:45.232 user 0m0.585s 00:08:45.232 sys 0m0.579s 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:45.232 ************************************ 00:08:45.232 START TEST dd_flags_misc 00:08:45.232 ************************************ 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:45.232 19:28:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:45.491 [2024-12-05 19:28:38.681064] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
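The control run starting here repeats the copy without --iflag=noatime; the (( atime_if < ... )) comparison in the following record then expects the source's access time to have moved past the value captured earlier. Sketch, under the same assumptions as above:

  build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1
  # Without O_NOATIME the read is expected to advance the atime, so the value
  # recorded before the sleep is now strictly older.
  (( atime_if < $(stat --printf=%X test/dd/dd.dump0) ))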
00:08:45.491 [2024-12-05 19:28:38.681384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60266 ] 00:08:45.491 [2024-12-05 19:28:38.827362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.491 [2024-12-05 19:28:38.883060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.750 [2024-12-05 19:28:38.941827] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.750  [2024-12-05T19:28:39.187Z] Copying: 512/512 [B] (average 500 kBps) 00:08:45.750 00:08:45.751 19:28:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sdwaj9e068fp3be88596mpta70cwkr9mrngm1slmjmkox7ifuj1s70on1ezod6gp0svpt0ct80brx0xqabj85cqylq03tmjqf7s1yeick39lfsttqukvcbzvjtz9bpzs38bn5r4wo4jhtdg0zbdff8gijpa8gz640mxb7ure4uuianbnmaydoh4zfc9w4am6ge10d7yeyzu0c1vfpi9gf5id3z31yrp2tqcyehhxieclg83tn54r759bm8ippfvvobs9xzd6myyo2unaerkixvuhhafenw1iib3i8n0wiq54cxp47sdkzkitytrqp8qz5od7g58fcdyzgdsb2qobq14f869krbvk43s0dw3evrqiwkcnr383uakegwqb5phg40glnjyp36cpmbpp9d52r5ih12e23t6bfpyydgeu3516xrbjvod97vodw7ta71r70x66hh3bnx7cf658k7vz5xt114itvef0xg503fb7rwulcfcup5qnhj8r7c6yrhqb == \s\d\w\a\j\9\e\0\6\8\f\p\3\b\e\8\8\5\9\6\m\p\t\a\7\0\c\w\k\r\9\m\r\n\g\m\1\s\l\m\j\m\k\o\x\7\i\f\u\j\1\s\7\0\o\n\1\e\z\o\d\6\g\p\0\s\v\p\t\0\c\t\8\0\b\r\x\0\x\q\a\b\j\8\5\c\q\y\l\q\0\3\t\m\j\q\f\7\s\1\y\e\i\c\k\3\9\l\f\s\t\t\q\u\k\v\c\b\z\v\j\t\z\9\b\p\z\s\3\8\b\n\5\r\4\w\o\4\j\h\t\d\g\0\z\b\d\f\f\8\g\i\j\p\a\8\g\z\6\4\0\m\x\b\7\u\r\e\4\u\u\i\a\n\b\n\m\a\y\d\o\h\4\z\f\c\9\w\4\a\m\6\g\e\1\0\d\7\y\e\y\z\u\0\c\1\v\f\p\i\9\g\f\5\i\d\3\z\3\1\y\r\p\2\t\q\c\y\e\h\h\x\i\e\c\l\g\8\3\t\n\5\4\r\7\5\9\b\m\8\i\p\p\f\v\v\o\b\s\9\x\z\d\6\m\y\y\o\2\u\n\a\e\r\k\i\x\v\u\h\h\a\f\e\n\w\1\i\i\b\3\i\8\n\0\w\i\q\5\4\c\x\p\4\7\s\d\k\z\k\i\t\y\t\r\q\p\8\q\z\5\o\d\7\g\5\8\f\c\d\y\z\g\d\s\b\2\q\o\b\q\1\4\f\8\6\9\k\r\b\v\k\4\3\s\0\d\w\3\e\v\r\q\i\w\k\c\n\r\3\8\3\u\a\k\e\g\w\q\b\5\p\h\g\4\0\g\l\n\j\y\p\3\6\c\p\m\b\p\p\9\d\5\2\r\5\i\h\1\2\e\2\3\t\6\b\f\p\y\y\d\g\e\u\3\5\1\6\x\r\b\j\v\o\d\9\7\v\o\d\w\7\t\a\7\1\r\7\0\x\6\6\h\h\3\b\n\x\7\c\f\6\5\8\k\7\v\z\5\x\t\1\1\4\i\t\v\e\f\0\x\g\5\0\3\f\b\7\r\w\u\l\c\f\c\u\p\5\q\n\h\j\8\r\7\c\6\y\r\h\q\b ]] 00:08:45.751 19:28:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:45.751 19:28:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:46.010 [2024-12-05 19:28:39.226722] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
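The flags_misc runs that follow sweep every read-flag/write-flag pairing: the trace above sets flags_ro=(direct nonblock), flags_rw adds sync and dsync, and the per-pair for loops appear at dd/posix.sh@85 and @87. The driving loop is roughly the sketch below, with paths abbreviated and the payload regeneration and post-copy comparison compressed into a comment:

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
          # Copy dump0 to dump1 with this flag pair, then re-read dump1 and
          # compare it against the generated payload.
          build/bin/spdk_dd --if=test/dd/dd.dump0 --iflag="$flag_ro" \
              --of=test/dd/dd.dump1 --oflag="$flag_rw"
      done
  done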
00:08:46.010 [2024-12-05 19:28:39.226986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60270 ] 00:08:46.010 [2024-12-05 19:28:39.375795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.010 [2024-12-05 19:28:39.434976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.268 [2024-12-05 19:28:39.494963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.268  [2024-12-05T19:28:39.965Z] Copying: 512/512 [B] (average 500 kBps) 00:08:46.528 00:08:46.528 19:28:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sdwaj9e068fp3be88596mpta70cwkr9mrngm1slmjmkox7ifuj1s70on1ezod6gp0svpt0ct80brx0xqabj85cqylq03tmjqf7s1yeick39lfsttqukvcbzvjtz9bpzs38bn5r4wo4jhtdg0zbdff8gijpa8gz640mxb7ure4uuianbnmaydoh4zfc9w4am6ge10d7yeyzu0c1vfpi9gf5id3z31yrp2tqcyehhxieclg83tn54r759bm8ippfvvobs9xzd6myyo2unaerkixvuhhafenw1iib3i8n0wiq54cxp47sdkzkitytrqp8qz5od7g58fcdyzgdsb2qobq14f869krbvk43s0dw3evrqiwkcnr383uakegwqb5phg40glnjyp36cpmbpp9d52r5ih12e23t6bfpyydgeu3516xrbjvod97vodw7ta71r70x66hh3bnx7cf658k7vz5xt114itvef0xg503fb7rwulcfcup5qnhj8r7c6yrhqb == \s\d\w\a\j\9\e\0\6\8\f\p\3\b\e\8\8\5\9\6\m\p\t\a\7\0\c\w\k\r\9\m\r\n\g\m\1\s\l\m\j\m\k\o\x\7\i\f\u\j\1\s\7\0\o\n\1\e\z\o\d\6\g\p\0\s\v\p\t\0\c\t\8\0\b\r\x\0\x\q\a\b\j\8\5\c\q\y\l\q\0\3\t\m\j\q\f\7\s\1\y\e\i\c\k\3\9\l\f\s\t\t\q\u\k\v\c\b\z\v\j\t\z\9\b\p\z\s\3\8\b\n\5\r\4\w\o\4\j\h\t\d\g\0\z\b\d\f\f\8\g\i\j\p\a\8\g\z\6\4\0\m\x\b\7\u\r\e\4\u\u\i\a\n\b\n\m\a\y\d\o\h\4\z\f\c\9\w\4\a\m\6\g\e\1\0\d\7\y\e\y\z\u\0\c\1\v\f\p\i\9\g\f\5\i\d\3\z\3\1\y\r\p\2\t\q\c\y\e\h\h\x\i\e\c\l\g\8\3\t\n\5\4\r\7\5\9\b\m\8\i\p\p\f\v\v\o\b\s\9\x\z\d\6\m\y\y\o\2\u\n\a\e\r\k\i\x\v\u\h\h\a\f\e\n\w\1\i\i\b\3\i\8\n\0\w\i\q\5\4\c\x\p\4\7\s\d\k\z\k\i\t\y\t\r\q\p\8\q\z\5\o\d\7\g\5\8\f\c\d\y\z\g\d\s\b\2\q\o\b\q\1\4\f\8\6\9\k\r\b\v\k\4\3\s\0\d\w\3\e\v\r\q\i\w\k\c\n\r\3\8\3\u\a\k\e\g\w\q\b\5\p\h\g\4\0\g\l\n\j\y\p\3\6\c\p\m\b\p\p\9\d\5\2\r\5\i\h\1\2\e\2\3\t\6\b\f\p\y\y\d\g\e\u\3\5\1\6\x\r\b\j\v\o\d\9\7\v\o\d\w\7\t\a\7\1\r\7\0\x\6\6\h\h\3\b\n\x\7\c\f\6\5\8\k\7\v\z\5\x\t\1\1\4\i\t\v\e\f\0\x\g\5\0\3\f\b\7\r\w\u\l\c\f\c\u\p\5\q\n\h\j\8\r\7\c\6\y\r\h\q\b ]] 00:08:46.528 19:28:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:46.528 19:28:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:46.528 [2024-12-05 19:28:39.775756] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:46.528 [2024-12-05 19:28:39.775893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60285 ] 00:08:46.528 [2024-12-05 19:28:39.919960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.786 [2024-12-05 19:28:39.973406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.786 [2024-12-05 19:28:40.027728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.786  [2024-12-05T19:28:40.482Z] Copying: 512/512 [B] (average 125 kBps) 00:08:47.045 00:08:47.045 19:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sdwaj9e068fp3be88596mpta70cwkr9mrngm1slmjmkox7ifuj1s70on1ezod6gp0svpt0ct80brx0xqabj85cqylq03tmjqf7s1yeick39lfsttqukvcbzvjtz9bpzs38bn5r4wo4jhtdg0zbdff8gijpa8gz640mxb7ure4uuianbnmaydoh4zfc9w4am6ge10d7yeyzu0c1vfpi9gf5id3z31yrp2tqcyehhxieclg83tn54r759bm8ippfvvobs9xzd6myyo2unaerkixvuhhafenw1iib3i8n0wiq54cxp47sdkzkitytrqp8qz5od7g58fcdyzgdsb2qobq14f869krbvk43s0dw3evrqiwkcnr383uakegwqb5phg40glnjyp36cpmbpp9d52r5ih12e23t6bfpyydgeu3516xrbjvod97vodw7ta71r70x66hh3bnx7cf658k7vz5xt114itvef0xg503fb7rwulcfcup5qnhj8r7c6yrhqb == \s\d\w\a\j\9\e\0\6\8\f\p\3\b\e\8\8\5\9\6\m\p\t\a\7\0\c\w\k\r\9\m\r\n\g\m\1\s\l\m\j\m\k\o\x\7\i\f\u\j\1\s\7\0\o\n\1\e\z\o\d\6\g\p\0\s\v\p\t\0\c\t\8\0\b\r\x\0\x\q\a\b\j\8\5\c\q\y\l\q\0\3\t\m\j\q\f\7\s\1\y\e\i\c\k\3\9\l\f\s\t\t\q\u\k\v\c\b\z\v\j\t\z\9\b\p\z\s\3\8\b\n\5\r\4\w\o\4\j\h\t\d\g\0\z\b\d\f\f\8\g\i\j\p\a\8\g\z\6\4\0\m\x\b\7\u\r\e\4\u\u\i\a\n\b\n\m\a\y\d\o\h\4\z\f\c\9\w\4\a\m\6\g\e\1\0\d\7\y\e\y\z\u\0\c\1\v\f\p\i\9\g\f\5\i\d\3\z\3\1\y\r\p\2\t\q\c\y\e\h\h\x\i\e\c\l\g\8\3\t\n\5\4\r\7\5\9\b\m\8\i\p\p\f\v\v\o\b\s\9\x\z\d\6\m\y\y\o\2\u\n\a\e\r\k\i\x\v\u\h\h\a\f\e\n\w\1\i\i\b\3\i\8\n\0\w\i\q\5\4\c\x\p\4\7\s\d\k\z\k\i\t\y\t\r\q\p\8\q\z\5\o\d\7\g\5\8\f\c\d\y\z\g\d\s\b\2\q\o\b\q\1\4\f\8\6\9\k\r\b\v\k\4\3\s\0\d\w\3\e\v\r\q\i\w\k\c\n\r\3\8\3\u\a\k\e\g\w\q\b\5\p\h\g\4\0\g\l\n\j\y\p\3\6\c\p\m\b\p\p\9\d\5\2\r\5\i\h\1\2\e\2\3\t\6\b\f\p\y\y\d\g\e\u\3\5\1\6\x\r\b\j\v\o\d\9\7\v\o\d\w\7\t\a\7\1\r\7\0\x\6\6\h\h\3\b\n\x\7\c\f\6\5\8\k\7\v\z\5\x\t\1\1\4\i\t\v\e\f\0\x\g\5\0\3\f\b\7\r\w\u\l\c\f\c\u\p\5\q\n\h\j\8\r\7\c\6\y\r\h\q\b ]] 00:08:47.045 19:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:47.045 19:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:47.045 [2024-12-05 19:28:40.315299] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:47.045 [2024-12-05 19:28:40.315394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60289 ] 00:08:47.045 [2024-12-05 19:28:40.460481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.305 [2024-12-05 19:28:40.513140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.305 [2024-12-05 19:28:40.567336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.305  [2024-12-05T19:28:41.001Z] Copying: 512/512 [B] (average 250 kBps) 00:08:47.564 00:08:47.564 19:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sdwaj9e068fp3be88596mpta70cwkr9mrngm1slmjmkox7ifuj1s70on1ezod6gp0svpt0ct80brx0xqabj85cqylq03tmjqf7s1yeick39lfsttqukvcbzvjtz9bpzs38bn5r4wo4jhtdg0zbdff8gijpa8gz640mxb7ure4uuianbnmaydoh4zfc9w4am6ge10d7yeyzu0c1vfpi9gf5id3z31yrp2tqcyehhxieclg83tn54r759bm8ippfvvobs9xzd6myyo2unaerkixvuhhafenw1iib3i8n0wiq54cxp47sdkzkitytrqp8qz5od7g58fcdyzgdsb2qobq14f869krbvk43s0dw3evrqiwkcnr383uakegwqb5phg40glnjyp36cpmbpp9d52r5ih12e23t6bfpyydgeu3516xrbjvod97vodw7ta71r70x66hh3bnx7cf658k7vz5xt114itvef0xg503fb7rwulcfcup5qnhj8r7c6yrhqb == \s\d\w\a\j\9\e\0\6\8\f\p\3\b\e\8\8\5\9\6\m\p\t\a\7\0\c\w\k\r\9\m\r\n\g\m\1\s\l\m\j\m\k\o\x\7\i\f\u\j\1\s\7\0\o\n\1\e\z\o\d\6\g\p\0\s\v\p\t\0\c\t\8\0\b\r\x\0\x\q\a\b\j\8\5\c\q\y\l\q\0\3\t\m\j\q\f\7\s\1\y\e\i\c\k\3\9\l\f\s\t\t\q\u\k\v\c\b\z\v\j\t\z\9\b\p\z\s\3\8\b\n\5\r\4\w\o\4\j\h\t\d\g\0\z\b\d\f\f\8\g\i\j\p\a\8\g\z\6\4\0\m\x\b\7\u\r\e\4\u\u\i\a\n\b\n\m\a\y\d\o\h\4\z\f\c\9\w\4\a\m\6\g\e\1\0\d\7\y\e\y\z\u\0\c\1\v\f\p\i\9\g\f\5\i\d\3\z\3\1\y\r\p\2\t\q\c\y\e\h\h\x\i\e\c\l\g\8\3\t\n\5\4\r\7\5\9\b\m\8\i\p\p\f\v\v\o\b\s\9\x\z\d\6\m\y\y\o\2\u\n\a\e\r\k\i\x\v\u\h\h\a\f\e\n\w\1\i\i\b\3\i\8\n\0\w\i\q\5\4\c\x\p\4\7\s\d\k\z\k\i\t\y\t\r\q\p\8\q\z\5\o\d\7\g\5\8\f\c\d\y\z\g\d\s\b\2\q\o\b\q\1\4\f\8\6\9\k\r\b\v\k\4\3\s\0\d\w\3\e\v\r\q\i\w\k\c\n\r\3\8\3\u\a\k\e\g\w\q\b\5\p\h\g\4\0\g\l\n\j\y\p\3\6\c\p\m\b\p\p\9\d\5\2\r\5\i\h\1\2\e\2\3\t\6\b\f\p\y\y\d\g\e\u\3\5\1\6\x\r\b\j\v\o\d\9\7\v\o\d\w\7\t\a\7\1\r\7\0\x\6\6\h\h\3\b\n\x\7\c\f\6\5\8\k\7\v\z\5\x\t\1\1\4\i\t\v\e\f\0\x\g\5\0\3\f\b\7\r\w\u\l\c\f\c\u\p\5\q\n\h\j\8\r\7\c\6\y\r\h\q\b ]] 00:08:47.564 19:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:47.564 19:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:47.564 19:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:47.564 19:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:47.564 19:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:47.564 19:28:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:47.565 [2024-12-05 19:28:40.856391] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:47.565 [2024-12-05 19:28:40.856495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60304 ] 00:08:47.565 [2024-12-05 19:28:40.998537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.824 [2024-12-05 19:28:41.044715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.824 [2024-12-05 19:28:41.097462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.824  [2024-12-05T19:28:41.519Z] Copying: 512/512 [B] (average 500 kBps) 00:08:48.082 00:08:48.083 19:28:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ b4c420t43nzbn398xy4b15fqnpju9qbyug0h12jzana2en37bjj1gqgqz6j7umpw5yfhwez41a6762eetri5v2cu3k337lu6xzqzx0hcgfoavigf083ys9sr28o34b9u47ze44vh01ldjqwqqg9rpfjpe67yurwl8108pw2skf5w7xvwfuyqqhialg6mjptcslotbkcraat3emw9sge0wlydiiiwj582iwyx3atqq1ng7jdz1dgg2nszs8edz0w9div9jwyl4xzii772fys6nse68qrhdcxsf3rydeocay6aeeq8fwnxqnovelnyd4a8sccyr3bgxup97jr2s9d830p67jqgnaybgbhv2tzxaux9z77eqv1637qy8qa0c9ajyghpid70zj68uy08ppqfj54lnn92btaclqpgvmgznmewzdtamdqderhzqr49c4xxv9i23k6hx9m9psjz9i4by4mcyfhh7laxfnisepok35ddnnoonplqo21grndfy2js == \b\4\c\4\2\0\t\4\3\n\z\b\n\3\9\8\x\y\4\b\1\5\f\q\n\p\j\u\9\q\b\y\u\g\0\h\1\2\j\z\a\n\a\2\e\n\3\7\b\j\j\1\g\q\g\q\z\6\j\7\u\m\p\w\5\y\f\h\w\e\z\4\1\a\6\7\6\2\e\e\t\r\i\5\v\2\c\u\3\k\3\3\7\l\u\6\x\z\q\z\x\0\h\c\g\f\o\a\v\i\g\f\0\8\3\y\s\9\s\r\2\8\o\3\4\b\9\u\4\7\z\e\4\4\v\h\0\1\l\d\j\q\w\q\q\g\9\r\p\f\j\p\e\6\7\y\u\r\w\l\8\1\0\8\p\w\2\s\k\f\5\w\7\x\v\w\f\u\y\q\q\h\i\a\l\g\6\m\j\p\t\c\s\l\o\t\b\k\c\r\a\a\t\3\e\m\w\9\s\g\e\0\w\l\y\d\i\i\i\w\j\5\8\2\i\w\y\x\3\a\t\q\q\1\n\g\7\j\d\z\1\d\g\g\2\n\s\z\s\8\e\d\z\0\w\9\d\i\v\9\j\w\y\l\4\x\z\i\i\7\7\2\f\y\s\6\n\s\e\6\8\q\r\h\d\c\x\s\f\3\r\y\d\e\o\c\a\y\6\a\e\e\q\8\f\w\n\x\q\n\o\v\e\l\n\y\d\4\a\8\s\c\c\y\r\3\b\g\x\u\p\9\7\j\r\2\s\9\d\8\3\0\p\6\7\j\q\g\n\a\y\b\g\b\h\v\2\t\z\x\a\u\x\9\z\7\7\e\q\v\1\6\3\7\q\y\8\q\a\0\c\9\a\j\y\g\h\p\i\d\7\0\z\j\6\8\u\y\0\8\p\p\q\f\j\5\4\l\n\n\9\2\b\t\a\c\l\q\p\g\v\m\g\z\n\m\e\w\z\d\t\a\m\d\q\d\e\r\h\z\q\r\4\9\c\4\x\x\v\9\i\2\3\k\6\h\x\9\m\9\p\s\j\z\9\i\4\b\y\4\m\c\y\f\h\h\7\l\a\x\f\n\i\s\e\p\o\k\3\5\d\d\n\n\o\o\n\p\l\q\o\2\1\g\r\n\d\f\y\2\j\s ]] 00:08:48.083 19:28:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:48.083 19:28:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:48.083 [2024-12-05 19:28:41.376689] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:48.083 [2024-12-05 19:28:41.376794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60308 ] 00:08:48.342 [2024-12-05 19:28:41.524862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.342 [2024-12-05 19:28:41.574785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.342 [2024-12-05 19:28:41.628509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:48.342  [2024-12-05T19:28:42.048Z] Copying: 512/512 [B] (average 500 kBps) 00:08:48.611 00:08:48.611 19:28:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ b4c420t43nzbn398xy4b15fqnpju9qbyug0h12jzana2en37bjj1gqgqz6j7umpw5yfhwez41a6762eetri5v2cu3k337lu6xzqzx0hcgfoavigf083ys9sr28o34b9u47ze44vh01ldjqwqqg9rpfjpe67yurwl8108pw2skf5w7xvwfuyqqhialg6mjptcslotbkcraat3emw9sge0wlydiiiwj582iwyx3atqq1ng7jdz1dgg2nszs8edz0w9div9jwyl4xzii772fys6nse68qrhdcxsf3rydeocay6aeeq8fwnxqnovelnyd4a8sccyr3bgxup97jr2s9d830p67jqgnaybgbhv2tzxaux9z77eqv1637qy8qa0c9ajyghpid70zj68uy08ppqfj54lnn92btaclqpgvmgznmewzdtamdqderhzqr49c4xxv9i23k6hx9m9psjz9i4by4mcyfhh7laxfnisepok35ddnnoonplqo21grndfy2js == \b\4\c\4\2\0\t\4\3\n\z\b\n\3\9\8\x\y\4\b\1\5\f\q\n\p\j\u\9\q\b\y\u\g\0\h\1\2\j\z\a\n\a\2\e\n\3\7\b\j\j\1\g\q\g\q\z\6\j\7\u\m\p\w\5\y\f\h\w\e\z\4\1\a\6\7\6\2\e\e\t\r\i\5\v\2\c\u\3\k\3\3\7\l\u\6\x\z\q\z\x\0\h\c\g\f\o\a\v\i\g\f\0\8\3\y\s\9\s\r\2\8\o\3\4\b\9\u\4\7\z\e\4\4\v\h\0\1\l\d\j\q\w\q\q\g\9\r\p\f\j\p\e\6\7\y\u\r\w\l\8\1\0\8\p\w\2\s\k\f\5\w\7\x\v\w\f\u\y\q\q\h\i\a\l\g\6\m\j\p\t\c\s\l\o\t\b\k\c\r\a\a\t\3\e\m\w\9\s\g\e\0\w\l\y\d\i\i\i\w\j\5\8\2\i\w\y\x\3\a\t\q\q\1\n\g\7\j\d\z\1\d\g\g\2\n\s\z\s\8\e\d\z\0\w\9\d\i\v\9\j\w\y\l\4\x\z\i\i\7\7\2\f\y\s\6\n\s\e\6\8\q\r\h\d\c\x\s\f\3\r\y\d\e\o\c\a\y\6\a\e\e\q\8\f\w\n\x\q\n\o\v\e\l\n\y\d\4\a\8\s\c\c\y\r\3\b\g\x\u\p\9\7\j\r\2\s\9\d\8\3\0\p\6\7\j\q\g\n\a\y\b\g\b\h\v\2\t\z\x\a\u\x\9\z\7\7\e\q\v\1\6\3\7\q\y\8\q\a\0\c\9\a\j\y\g\h\p\i\d\7\0\z\j\6\8\u\y\0\8\p\p\q\f\j\5\4\l\n\n\9\2\b\t\a\c\l\q\p\g\v\m\g\z\n\m\e\w\z\d\t\a\m\d\q\d\e\r\h\z\q\r\4\9\c\4\x\x\v\9\i\2\3\k\6\h\x\9\m\9\p\s\j\z\9\i\4\b\y\4\m\c\y\f\h\h\7\l\a\x\f\n\i\s\e\p\o\k\3\5\d\d\n\n\o\o\n\p\l\q\o\2\1\g\r\n\d\f\y\2\j\s ]] 00:08:48.611 19:28:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:48.611 19:28:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:48.611 [2024-12-05 19:28:41.901458] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:48.611 [2024-12-05 19:28:41.901575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60325 ] 00:08:48.872 [2024-12-05 19:28:42.049891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.872 [2024-12-05 19:28:42.107160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.872 [2024-12-05 19:28:42.167746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:48.872  [2024-12-05T19:28:42.582Z] Copying: 512/512 [B] (average 166 kBps) 00:08:49.145 00:08:49.146 19:28:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ b4c420t43nzbn398xy4b15fqnpju9qbyug0h12jzana2en37bjj1gqgqz6j7umpw5yfhwez41a6762eetri5v2cu3k337lu6xzqzx0hcgfoavigf083ys9sr28o34b9u47ze44vh01ldjqwqqg9rpfjpe67yurwl8108pw2skf5w7xvwfuyqqhialg6mjptcslotbkcraat3emw9sge0wlydiiiwj582iwyx3atqq1ng7jdz1dgg2nszs8edz0w9div9jwyl4xzii772fys6nse68qrhdcxsf3rydeocay6aeeq8fwnxqnovelnyd4a8sccyr3bgxup97jr2s9d830p67jqgnaybgbhv2tzxaux9z77eqv1637qy8qa0c9ajyghpid70zj68uy08ppqfj54lnn92btaclqpgvmgznmewzdtamdqderhzqr49c4xxv9i23k6hx9m9psjz9i4by4mcyfhh7laxfnisepok35ddnnoonplqo21grndfy2js == \b\4\c\4\2\0\t\4\3\n\z\b\n\3\9\8\x\y\4\b\1\5\f\q\n\p\j\u\9\q\b\y\u\g\0\h\1\2\j\z\a\n\a\2\e\n\3\7\b\j\j\1\g\q\g\q\z\6\j\7\u\m\p\w\5\y\f\h\w\e\z\4\1\a\6\7\6\2\e\e\t\r\i\5\v\2\c\u\3\k\3\3\7\l\u\6\x\z\q\z\x\0\h\c\g\f\o\a\v\i\g\f\0\8\3\y\s\9\s\r\2\8\o\3\4\b\9\u\4\7\z\e\4\4\v\h\0\1\l\d\j\q\w\q\q\g\9\r\p\f\j\p\e\6\7\y\u\r\w\l\8\1\0\8\p\w\2\s\k\f\5\w\7\x\v\w\f\u\y\q\q\h\i\a\l\g\6\m\j\p\t\c\s\l\o\t\b\k\c\r\a\a\t\3\e\m\w\9\s\g\e\0\w\l\y\d\i\i\i\w\j\5\8\2\i\w\y\x\3\a\t\q\q\1\n\g\7\j\d\z\1\d\g\g\2\n\s\z\s\8\e\d\z\0\w\9\d\i\v\9\j\w\y\l\4\x\z\i\i\7\7\2\f\y\s\6\n\s\e\6\8\q\r\h\d\c\x\s\f\3\r\y\d\e\o\c\a\y\6\a\e\e\q\8\f\w\n\x\q\n\o\v\e\l\n\y\d\4\a\8\s\c\c\y\r\3\b\g\x\u\p\9\7\j\r\2\s\9\d\8\3\0\p\6\7\j\q\g\n\a\y\b\g\b\h\v\2\t\z\x\a\u\x\9\z\7\7\e\q\v\1\6\3\7\q\y\8\q\a\0\c\9\a\j\y\g\h\p\i\d\7\0\z\j\6\8\u\y\0\8\p\p\q\f\j\5\4\l\n\n\9\2\b\t\a\c\l\q\p\g\v\m\g\z\n\m\e\w\z\d\t\a\m\d\q\d\e\r\h\z\q\r\4\9\c\4\x\x\v\9\i\2\3\k\6\h\x\9\m\9\p\s\j\z\9\i\4\b\y\4\m\c\y\f\h\h\7\l\a\x\f\n\i\s\e\p\o\k\3\5\d\d\n\n\o\o\n\p\l\q\o\2\1\g\r\n\d\f\y\2\j\s ]] 00:08:49.146 19:28:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:49.146 19:28:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:49.146 [2024-12-05 19:28:42.469294] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:49.146 [2024-12-05 19:28:42.469627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60329 ] 00:08:49.417 [2024-12-05 19:28:42.617634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.417 [2024-12-05 19:28:42.666155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.417 [2024-12-05 19:28:42.720421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.417  [2024-12-05T19:28:43.114Z] Copying: 512/512 [B] (average 250 kBps) 00:08:49.677 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ b4c420t43nzbn398xy4b15fqnpju9qbyug0h12jzana2en37bjj1gqgqz6j7umpw5yfhwez41a6762eetri5v2cu3k337lu6xzqzx0hcgfoavigf083ys9sr28o34b9u47ze44vh01ldjqwqqg9rpfjpe67yurwl8108pw2skf5w7xvwfuyqqhialg6mjptcslotbkcraat3emw9sge0wlydiiiwj582iwyx3atqq1ng7jdz1dgg2nszs8edz0w9div9jwyl4xzii772fys6nse68qrhdcxsf3rydeocay6aeeq8fwnxqnovelnyd4a8sccyr3bgxup97jr2s9d830p67jqgnaybgbhv2tzxaux9z77eqv1637qy8qa0c9ajyghpid70zj68uy08ppqfj54lnn92btaclqpgvmgznmewzdtamdqderhzqr49c4xxv9i23k6hx9m9psjz9i4by4mcyfhh7laxfnisepok35ddnnoonplqo21grndfy2js == \b\4\c\4\2\0\t\4\3\n\z\b\n\3\9\8\x\y\4\b\1\5\f\q\n\p\j\u\9\q\b\y\u\g\0\h\1\2\j\z\a\n\a\2\e\n\3\7\b\j\j\1\g\q\g\q\z\6\j\7\u\m\p\w\5\y\f\h\w\e\z\4\1\a\6\7\6\2\e\e\t\r\i\5\v\2\c\u\3\k\3\3\7\l\u\6\x\z\q\z\x\0\h\c\g\f\o\a\v\i\g\f\0\8\3\y\s\9\s\r\2\8\o\3\4\b\9\u\4\7\z\e\4\4\v\h\0\1\l\d\j\q\w\q\q\g\9\r\p\f\j\p\e\6\7\y\u\r\w\l\8\1\0\8\p\w\2\s\k\f\5\w\7\x\v\w\f\u\y\q\q\h\i\a\l\g\6\m\j\p\t\c\s\l\o\t\b\k\c\r\a\a\t\3\e\m\w\9\s\g\e\0\w\l\y\d\i\i\i\w\j\5\8\2\i\w\y\x\3\a\t\q\q\1\n\g\7\j\d\z\1\d\g\g\2\n\s\z\s\8\e\d\z\0\w\9\d\i\v\9\j\w\y\l\4\x\z\i\i\7\7\2\f\y\s\6\n\s\e\6\8\q\r\h\d\c\x\s\f\3\r\y\d\e\o\c\a\y\6\a\e\e\q\8\f\w\n\x\q\n\o\v\e\l\n\y\d\4\a\8\s\c\c\y\r\3\b\g\x\u\p\9\7\j\r\2\s\9\d\8\3\0\p\6\7\j\q\g\n\a\y\b\g\b\h\v\2\t\z\x\a\u\x\9\z\7\7\e\q\v\1\6\3\7\q\y\8\q\a\0\c\9\a\j\y\g\h\p\i\d\7\0\z\j\6\8\u\y\0\8\p\p\q\f\j\5\4\l\n\n\9\2\b\t\a\c\l\q\p\g\v\m\g\z\n\m\e\w\z\d\t\a\m\d\q\d\e\r\h\z\q\r\4\9\c\4\x\x\v\9\i\2\3\k\6\h\x\9\m\9\p\s\j\z\9\i\4\b\y\4\m\c\y\f\h\h\7\l\a\x\f\n\i\s\e\p\o\k\3\5\d\d\n\n\o\o\n\p\l\q\o\2\1\g\r\n\d\f\y\2\j\s ]] 00:08:49.677 00:08:49.677 real 0m4.325s 00:08:49.677 user 0m2.317s 00:08:49.677 sys 0m2.213s 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.677 ************************************ 00:08:49.677 END TEST dd_flags_misc 00:08:49.677 ************************************ 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:49.677 * Second test run, disabling liburing, forcing AIO 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.677 ************************************ 00:08:49.677 START TEST dd_flag_append_forced_aio 00:08:49.677 ************************************ 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=i71k3ctl8qp7zy2h3zbp6do2u4b7vm2v 00:08:49.677 19:28:42 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:49.677 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:49.677 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:49.677 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=2900r8o88tvi8iyn38m0kkjzvhsm8i8z 00:08:49.677 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s i71k3ctl8qp7zy2h3zbp6do2u4b7vm2v 00:08:49.677 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 2900r8o88tvi8iyn38m0kkjzvhsm8i8z 00:08:49.677 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:49.677 [2024-12-05 19:28:43.060186] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:49.677 [2024-12-05 19:28:43.060290] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60363 ] 00:08:49.936 [2024-12-05 19:28:43.203215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.936 [2024-12-05 19:28:43.258740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.936 [2024-12-05 19:28:43.320864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.936  [2024-12-05T19:28:43.633Z] Copying: 32/32 [B] (average 31 kBps) 00:08:50.196 00:08:50.196 ************************************ 00:08:50.196 END TEST dd_flag_append_forced_aio 00:08:50.196 ************************************ 00:08:50.196 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 2900r8o88tvi8iyn38m0kkjzvhsm8i8zi71k3ctl8qp7zy2h3zbp6do2u4b7vm2v == \2\9\0\0\r\8\o\8\8\t\v\i\8\i\y\n\3\8\m\0\k\k\j\z\v\h\s\m\8\i\8\z\i\7\1\k\3\c\t\l\8\q\p\7\z\y\2\h\3\z\b\p\6\d\o\2\u\4\b\7\v\m\2\v ]] 00:08:50.196 00:08:50.196 real 0m0.602s 00:08:50.196 user 0m0.307s 00:08:50.196 sys 0m0.157s 00:08:50.196 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.196 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:50.455 19:28:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:50.455 19:28:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.455 19:28:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.455 19:28:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:50.455 ************************************ 00:08:50.455 START TEST dd_flag_directory_forced_aio 00:08:50.455 ************************************ 00:08:50.455 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:08:50.455 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:50.455 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:50.455 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:50.455 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.455 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.455 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.455 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.455 19:28:43 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.455 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.456 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.456 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:50.456 19:28:43 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:50.456 [2024-12-05 19:28:43.713207] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:50.456 [2024-12-05 19:28:43.713398] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60390 ] 00:08:50.456 [2024-12-05 19:28:43.858746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.714 [2024-12-05 19:28:43.915061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.714 [2024-12-05 19:28:43.972226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.714 [2024-12-05 19:28:44.011677] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:50.714 [2024-12-05 19:28:44.011733] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:50.714 [2024-12-05 19:28:44.011752] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.714 [2024-12-05 19:28:44.141678] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:50.972 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:50.972 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:50.972 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:50.972 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:50.973 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:50.973 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:50.973 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:50.973 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:50.973 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:50.973 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.973 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.973 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.973 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.973 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.973 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:50.973 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.973 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:50.973 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:50.973 [2024-12-05 19:28:44.294906] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:50.973 [2024-12-05 19:28:44.294999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60401 ] 00:08:51.231 [2024-12-05 19:28:44.440681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.231 [2024-12-05 19:28:44.497561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.231 [2024-12-05 19:28:44.554737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.231 [2024-12-05 19:28:44.596838] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:51.231 [2024-12-05 19:28:44.597140] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:51.231 [2024-12-05 19:28:44.597169] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:51.490 [2024-12-05 19:28:44.724399] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:51.490 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:51.490 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:51.490 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:51.490 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:51.490 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:51.490 19:28:44 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:51.490 00:08:51.490 real 0m1.143s 00:08:51.490 user 0m0.645s 00:08:51.490 sys 0m0.286s 00:08:51.490 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.490 ************************************ 00:08:51.490 END TEST dd_flag_directory_forced_aio 00:08:51.490 ************************************ 00:08:51.490 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:51.490 19:28:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:51.490 19:28:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.490 19:28:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.490 19:28:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:51.491 ************************************ 00:08:51.491 START TEST dd_flag_nofollow_forced_aio 00:08:51.491 ************************************ 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:51.491 19:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:51.491 [2024-12-05 19:28:44.911861] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:51.491 [2024-12-05 19:28:44.911960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60430 ] 00:08:51.750 [2024-12-05 19:28:45.061662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.750 [2024-12-05 19:28:45.120036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.750 [2024-12-05 19:28:45.182062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.008 [2024-12-05 19:28:45.223811] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:52.008 [2024-12-05 19:28:45.223866] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:52.008 [2024-12-05 19:28:45.223916] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:52.008 [2024-12-05 19:28:45.357959] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:52.008 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:52.266 [2024-12-05 19:28:45.489740] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:52.266 [2024-12-05 19:28:45.490000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60439 ] 00:08:52.266 [2024-12-05 19:28:45.640366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.266 [2024-12-05 19:28:45.695880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.526 [2024-12-05 19:28:45.754795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.526 [2024-12-05 19:28:45.795674] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:52.526 [2024-12-05 19:28:45.795945] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:52.526 [2024-12-05 19:28:45.796127] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:52.526 [2024-12-05 19:28:45.926353] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:52.784 19:28:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:52.784 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:52.784 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:52.784 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:52.784 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:52.784 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:52.784 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:08:52.784 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:52.784 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:52.784 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:52.784 [2024-12-05 19:28:46.064543] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:52.784 [2024-12-05 19:28:46.064765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60452 ] 00:08:52.784 [2024-12-05 19:28:46.211515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.042 [2024-12-05 19:28:46.268750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.042 [2024-12-05 19:28:46.329026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.042  [2024-12-05T19:28:46.780Z] Copying: 512/512 [B] (average 500 kBps) 00:08:53.343 00:08:53.343 ************************************ 00:08:53.343 END TEST dd_flag_nofollow_forced_aio 00:08:53.343 ************************************ 00:08:53.343 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 3p36ijxsudlf89c9tlpt9aar8l6b3bfgk4d8t4lqyizk3dkhmymtofw70pkx6vskcg7dn8zalpxzfuhg7rn4og4wej7onpaqhx3zqjpftig91wszw0fa5ybbc5nz8c8lfq9mhrqvyacytonkq0ltiih94a1nbn2p2j91bw4scy2kavcuv86sl3tzbka0k6ti6v73l66vivbznabr5s4hov99bzo2m4hzsnbhkggergtkcvzp7j4sc5hssj13adgobx3aqiik3esmpogl8vjz4i3mk78ndjzhmxarx1nab2i3jttfsp2kad1h6b5cdjj43qb0epy4dwhyphp2sy3vryvn796eve00eu26unj3me1lusku1pxlsitl4lvzf6k0uy4ccb3m116oardj6t6ueoopkrmr3vyytro17uc956kbvjfr4sfk0j0nqfh5sjya183dd8bvz5fozpijldjvm6o8pfafrc1leamzj2z90erewi1ws937or0fze2egkzw == \3\p\3\6\i\j\x\s\u\d\l\f\8\9\c\9\t\l\p\t\9\a\a\r\8\l\6\b\3\b\f\g\k\4\d\8\t\4\l\q\y\i\z\k\3\d\k\h\m\y\m\t\o\f\w\7\0\p\k\x\6\v\s\k\c\g\7\d\n\8\z\a\l\p\x\z\f\u\h\g\7\r\n\4\o\g\4\w\e\j\7\o\n\p\a\q\h\x\3\z\q\j\p\f\t\i\g\9\1\w\s\z\w\0\f\a\5\y\b\b\c\5\n\z\8\c\8\l\f\q\9\m\h\r\q\v\y\a\c\y\t\o\n\k\q\0\l\t\i\i\h\9\4\a\1\n\b\n\2\p\2\j\9\1\b\w\4\s\c\y\2\k\a\v\c\u\v\8\6\s\l\3\t\z\b\k\a\0\k\6\t\i\6\v\7\3\l\6\6\v\i\v\b\z\n\a\b\r\5\s\4\h\o\v\9\9\b\z\o\2\m\4\h\z\s\n\b\h\k\g\g\e\r\g\t\k\c\v\z\p\7\j\4\s\c\5\h\s\s\j\1\3\a\d\g\o\b\x\3\a\q\i\i\k\3\e\s\m\p\o\g\l\8\v\j\z\4\i\3\m\k\7\8\n\d\j\z\h\m\x\a\r\x\1\n\a\b\2\i\3\j\t\t\f\s\p\2\k\a\d\1\h\6\b\5\c\d\j\j\4\3\q\b\0\e\p\y\4\d\w\h\y\p\h\p\2\s\y\3\v\r\y\v\n\7\9\6\e\v\e\0\0\e\u\2\6\u\n\j\3\m\e\1\l\u\s\k\u\1\p\x\l\s\i\t\l\4\l\v\z\f\6\k\0\u\y\4\c\c\b\3\m\1\1\6\o\a\r\d\j\6\t\6\u\e\o\o\p\k\r\m\r\3\v\y\y\t\r\o\1\7\u\c\9\5\6\k\b\v\j\f\r\4\s\f\k\0\j\0\n\q\f\h\5\s\j\y\a\1\8\3\d\d\8\b\v\z\5\f\o\z\p\i\j\l\d\j\v\m\6\o\8\p\f\a\f\r\c\1\l\e\a\m\z\j\2\z\9\0\e\r\e\w\i\1\w\s\9\3\7\o\r\0\f\z\e\2\e\g\k\z\w ]] 00:08:53.343 00:08:53.343 real 0m1.748s 00:08:53.343 user 0m0.945s 00:08:53.343 sys 0m0.469s 00:08:53.343 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.343 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:53.343 19:28:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:08:53.343 19:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.344 19:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.344 19:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:53.344 ************************************ 00:08:53.344 START TEST dd_flag_noatime_forced_aio 00:08:53.344 ************************************ 00:08:53.344 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:53.344 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:53.344 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:53.344 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:53.344 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:53.344 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:53.344 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:53.344 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733426926 00:08:53.344 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:53.344 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733426926 00:08:53.344 19:28:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:54.280 19:28:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:54.538 [2024-12-05 19:28:47.716796] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:54.538 [2024-12-05 19:28:47.717154] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60493 ] 00:08:54.538 [2024-12-05 19:28:47.861515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.538 [2024-12-05 19:28:47.918089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.796 [2024-12-05 19:28:47.977248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:54.796  [2024-12-05T19:28:48.491Z] Copying: 512/512 [B] (average 500 kBps) 00:08:55.054 00:08:55.054 19:28:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:55.054 19:28:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733426926 )) 00:08:55.054 19:28:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:55.054 19:28:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733426926 )) 00:08:55.054 19:28:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:55.054 [2024-12-05 19:28:48.319539] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:55.054 [2024-12-05 19:28:48.319639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60504 ] 00:08:55.055 [2024-12-05 19:28:48.468209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.313 [2024-12-05 19:28:48.528794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.313 [2024-12-05 19:28:48.588994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.313  [2024-12-05T19:28:49.009Z] Copying: 512/512 [B] (average 500 kBps) 00:08:55.572 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:55.572 ************************************ 00:08:55.572 END TEST dd_flag_noatime_forced_aio 00:08:55.572 ************************************ 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733426928 )) 00:08:55.572 00:08:55.572 real 0m2.208s 00:08:55.572 user 0m0.635s 00:08:55.572 sys 0m0.326s 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.572 ************************************ 00:08:55.572 START TEST dd_flags_misc_forced_aio 00:08:55.572 ************************************ 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:55.572 19:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:55.572 [2024-12-05 19:28:48.974420] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:55.572 [2024-12-05 19:28:48.974511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60531 ] 00:08:55.831 [2024-12-05 19:28:49.122495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.831 [2024-12-05 19:28:49.180159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.831 [2024-12-05 19:28:49.244145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.121  [2024-12-05T19:28:49.558Z] Copying: 512/512 [B] (average 500 kBps) 00:08:56.121 00:08:56.121 19:28:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zav7h7azaov8mznlz2ihyvbj39vewag7qi8qbqjpef7wro1a42rj5fq8kc17gxfftooutehlnjquddpj8uw23ehhkyxz9ybjmri5v2yefxvmt2si3ug30q3zjs6f8xrl4bxza2r77yipzufwac47jx75552hdrp8rmzadjdo6g8hg6shl4276p18iopyozcvleg4trmev6cnc13u0rnln63ec9yp08ywk1w0g8e23qj105rincslft8tbaqu8kmqb0wl7ziv0yelewafxxvjlrblho3ghvxl9e007smx0x05zaqgnps4rmzil8w3x1wh6zvn1j7jzat5uookkzajkpcedo6mpog8697t596ixdixyb4tkenlnv8ecrqktj2dohdkxwdv2pgtbz2nrfct7d3cs3hfbkk5qldzz149w5t9azg97o6stkr3y64mbs5i77ugvcevuvlee9iwybgtp3yofee8j64zkil8bmk00i16vvpwz81i63e0yfwukgy4 == 
\z\a\v\7\h\7\a\z\a\o\v\8\m\z\n\l\z\2\i\h\y\v\b\j\3\9\v\e\w\a\g\7\q\i\8\q\b\q\j\p\e\f\7\w\r\o\1\a\4\2\r\j\5\f\q\8\k\c\1\7\g\x\f\f\t\o\o\u\t\e\h\l\n\j\q\u\d\d\p\j\8\u\w\2\3\e\h\h\k\y\x\z\9\y\b\j\m\r\i\5\v\2\y\e\f\x\v\m\t\2\s\i\3\u\g\3\0\q\3\z\j\s\6\f\8\x\r\l\4\b\x\z\a\2\r\7\7\y\i\p\z\u\f\w\a\c\4\7\j\x\7\5\5\5\2\h\d\r\p\8\r\m\z\a\d\j\d\o\6\g\8\h\g\6\s\h\l\4\2\7\6\p\1\8\i\o\p\y\o\z\c\v\l\e\g\4\t\r\m\e\v\6\c\n\c\1\3\u\0\r\n\l\n\6\3\e\c\9\y\p\0\8\y\w\k\1\w\0\g\8\e\2\3\q\j\1\0\5\r\i\n\c\s\l\f\t\8\t\b\a\q\u\8\k\m\q\b\0\w\l\7\z\i\v\0\y\e\l\e\w\a\f\x\x\v\j\l\r\b\l\h\o\3\g\h\v\x\l\9\e\0\0\7\s\m\x\0\x\0\5\z\a\q\g\n\p\s\4\r\m\z\i\l\8\w\3\x\1\w\h\6\z\v\n\1\j\7\j\z\a\t\5\u\o\o\k\k\z\a\j\k\p\c\e\d\o\6\m\p\o\g\8\6\9\7\t\5\9\6\i\x\d\i\x\y\b\4\t\k\e\n\l\n\v\8\e\c\r\q\k\t\j\2\d\o\h\d\k\x\w\d\v\2\p\g\t\b\z\2\n\r\f\c\t\7\d\3\c\s\3\h\f\b\k\k\5\q\l\d\z\z\1\4\9\w\5\t\9\a\z\g\9\7\o\6\s\t\k\r\3\y\6\4\m\b\s\5\i\7\7\u\g\v\c\e\v\u\v\l\e\e\9\i\w\y\b\g\t\p\3\y\o\f\e\e\8\j\6\4\z\k\i\l\8\b\m\k\0\0\i\1\6\v\v\p\w\z\8\1\i\6\3\e\0\y\f\w\u\k\g\y\4 ]] 00:08:56.121 19:28:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:56.121 19:28:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:56.379 [2024-12-05 19:28:49.578400] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:56.379 [2024-12-05 19:28:49.578545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60538 ] 00:08:56.379 [2024-12-05 19:28:49.731225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.379 [2024-12-05 19:28:49.787924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.637 [2024-12-05 19:28:49.847440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.637  [2024-12-05T19:28:50.331Z] Copying: 512/512 [B] (average 500 kBps) 00:08:56.894 00:08:56.895 19:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zav7h7azaov8mznlz2ihyvbj39vewag7qi8qbqjpef7wro1a42rj5fq8kc17gxfftooutehlnjquddpj8uw23ehhkyxz9ybjmri5v2yefxvmt2si3ug30q3zjs6f8xrl4bxza2r77yipzufwac47jx75552hdrp8rmzadjdo6g8hg6shl4276p18iopyozcvleg4trmev6cnc13u0rnln63ec9yp08ywk1w0g8e23qj105rincslft8tbaqu8kmqb0wl7ziv0yelewafxxvjlrblho3ghvxl9e007smx0x05zaqgnps4rmzil8w3x1wh6zvn1j7jzat5uookkzajkpcedo6mpog8697t596ixdixyb4tkenlnv8ecrqktj2dohdkxwdv2pgtbz2nrfct7d3cs3hfbkk5qldzz149w5t9azg97o6stkr3y64mbs5i77ugvcevuvlee9iwybgtp3yofee8j64zkil8bmk00i16vvpwz81i63e0yfwukgy4 == 
\z\a\v\7\h\7\a\z\a\o\v\8\m\z\n\l\z\2\i\h\y\v\b\j\3\9\v\e\w\a\g\7\q\i\8\q\b\q\j\p\e\f\7\w\r\o\1\a\4\2\r\j\5\f\q\8\k\c\1\7\g\x\f\f\t\o\o\u\t\e\h\l\n\j\q\u\d\d\p\j\8\u\w\2\3\e\h\h\k\y\x\z\9\y\b\j\m\r\i\5\v\2\y\e\f\x\v\m\t\2\s\i\3\u\g\3\0\q\3\z\j\s\6\f\8\x\r\l\4\b\x\z\a\2\r\7\7\y\i\p\z\u\f\w\a\c\4\7\j\x\7\5\5\5\2\h\d\r\p\8\r\m\z\a\d\j\d\o\6\g\8\h\g\6\s\h\l\4\2\7\6\p\1\8\i\o\p\y\o\z\c\v\l\e\g\4\t\r\m\e\v\6\c\n\c\1\3\u\0\r\n\l\n\6\3\e\c\9\y\p\0\8\y\w\k\1\w\0\g\8\e\2\3\q\j\1\0\5\r\i\n\c\s\l\f\t\8\t\b\a\q\u\8\k\m\q\b\0\w\l\7\z\i\v\0\y\e\l\e\w\a\f\x\x\v\j\l\r\b\l\h\o\3\g\h\v\x\l\9\e\0\0\7\s\m\x\0\x\0\5\z\a\q\g\n\p\s\4\r\m\z\i\l\8\w\3\x\1\w\h\6\z\v\n\1\j\7\j\z\a\t\5\u\o\o\k\k\z\a\j\k\p\c\e\d\o\6\m\p\o\g\8\6\9\7\t\5\9\6\i\x\d\i\x\y\b\4\t\k\e\n\l\n\v\8\e\c\r\q\k\t\j\2\d\o\h\d\k\x\w\d\v\2\p\g\t\b\z\2\n\r\f\c\t\7\d\3\c\s\3\h\f\b\k\k\5\q\l\d\z\z\1\4\9\w\5\t\9\a\z\g\9\7\o\6\s\t\k\r\3\y\6\4\m\b\s\5\i\7\7\u\g\v\c\e\v\u\v\l\e\e\9\i\w\y\b\g\t\p\3\y\o\f\e\e\8\j\6\4\z\k\i\l\8\b\m\k\0\0\i\1\6\v\v\p\w\z\8\1\i\6\3\e\0\y\f\w\u\k\g\y\4 ]] 00:08:56.895 19:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:56.895 19:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:56.895 [2024-12-05 19:28:50.173610] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:56.895 [2024-12-05 19:28:50.173956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60546 ] 00:08:56.895 [2024-12-05 19:28:50.324727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.153 [2024-12-05 19:28:50.380837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.153 [2024-12-05 19:28:50.441518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.153  [2024-12-05T19:28:50.848Z] Copying: 512/512 [B] (average 125 kBps) 00:08:57.411 00:08:57.411 19:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zav7h7azaov8mznlz2ihyvbj39vewag7qi8qbqjpef7wro1a42rj5fq8kc17gxfftooutehlnjquddpj8uw23ehhkyxz9ybjmri5v2yefxvmt2si3ug30q3zjs6f8xrl4bxza2r77yipzufwac47jx75552hdrp8rmzadjdo6g8hg6shl4276p18iopyozcvleg4trmev6cnc13u0rnln63ec9yp08ywk1w0g8e23qj105rincslft8tbaqu8kmqb0wl7ziv0yelewafxxvjlrblho3ghvxl9e007smx0x05zaqgnps4rmzil8w3x1wh6zvn1j7jzat5uookkzajkpcedo6mpog8697t596ixdixyb4tkenlnv8ecrqktj2dohdkxwdv2pgtbz2nrfct7d3cs3hfbkk5qldzz149w5t9azg97o6stkr3y64mbs5i77ugvcevuvlee9iwybgtp3yofee8j64zkil8bmk00i16vvpwz81i63e0yfwukgy4 == 
\z\a\v\7\h\7\a\z\a\o\v\8\m\z\n\l\z\2\i\h\y\v\b\j\3\9\v\e\w\a\g\7\q\i\8\q\b\q\j\p\e\f\7\w\r\o\1\a\4\2\r\j\5\f\q\8\k\c\1\7\g\x\f\f\t\o\o\u\t\e\h\l\n\j\q\u\d\d\p\j\8\u\w\2\3\e\h\h\k\y\x\z\9\y\b\j\m\r\i\5\v\2\y\e\f\x\v\m\t\2\s\i\3\u\g\3\0\q\3\z\j\s\6\f\8\x\r\l\4\b\x\z\a\2\r\7\7\y\i\p\z\u\f\w\a\c\4\7\j\x\7\5\5\5\2\h\d\r\p\8\r\m\z\a\d\j\d\o\6\g\8\h\g\6\s\h\l\4\2\7\6\p\1\8\i\o\p\y\o\z\c\v\l\e\g\4\t\r\m\e\v\6\c\n\c\1\3\u\0\r\n\l\n\6\3\e\c\9\y\p\0\8\y\w\k\1\w\0\g\8\e\2\3\q\j\1\0\5\r\i\n\c\s\l\f\t\8\t\b\a\q\u\8\k\m\q\b\0\w\l\7\z\i\v\0\y\e\l\e\w\a\f\x\x\v\j\l\r\b\l\h\o\3\g\h\v\x\l\9\e\0\0\7\s\m\x\0\x\0\5\z\a\q\g\n\p\s\4\r\m\z\i\l\8\w\3\x\1\w\h\6\z\v\n\1\j\7\j\z\a\t\5\u\o\o\k\k\z\a\j\k\p\c\e\d\o\6\m\p\o\g\8\6\9\7\t\5\9\6\i\x\d\i\x\y\b\4\t\k\e\n\l\n\v\8\e\c\r\q\k\t\j\2\d\o\h\d\k\x\w\d\v\2\p\g\t\b\z\2\n\r\f\c\t\7\d\3\c\s\3\h\f\b\k\k\5\q\l\d\z\z\1\4\9\w\5\t\9\a\z\g\9\7\o\6\s\t\k\r\3\y\6\4\m\b\s\5\i\7\7\u\g\v\c\e\v\u\v\l\e\e\9\i\w\y\b\g\t\p\3\y\o\f\e\e\8\j\6\4\z\k\i\l\8\b\m\k\0\0\i\1\6\v\v\p\w\z\8\1\i\6\3\e\0\y\f\w\u\k\g\y\4 ]] 00:08:57.411 19:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:57.411 19:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:57.411 [2024-12-05 19:28:50.794091] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:57.411 [2024-12-05 19:28:50.794240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60559 ] 00:08:57.669 [2024-12-05 19:28:50.946719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.669 [2024-12-05 19:28:51.007967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.669 [2024-12-05 19:28:51.069071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.927  [2024-12-05T19:28:51.364Z] Copying: 512/512 [B] (average 250 kBps) 00:08:57.927 00:08:57.927 19:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zav7h7azaov8mznlz2ihyvbj39vewag7qi8qbqjpef7wro1a42rj5fq8kc17gxfftooutehlnjquddpj8uw23ehhkyxz9ybjmri5v2yefxvmt2si3ug30q3zjs6f8xrl4bxza2r77yipzufwac47jx75552hdrp8rmzadjdo6g8hg6shl4276p18iopyozcvleg4trmev6cnc13u0rnln63ec9yp08ywk1w0g8e23qj105rincslft8tbaqu8kmqb0wl7ziv0yelewafxxvjlrblho3ghvxl9e007smx0x05zaqgnps4rmzil8w3x1wh6zvn1j7jzat5uookkzajkpcedo6mpog8697t596ixdixyb4tkenlnv8ecrqktj2dohdkxwdv2pgtbz2nrfct7d3cs3hfbkk5qldzz149w5t9azg97o6stkr3y64mbs5i77ugvcevuvlee9iwybgtp3yofee8j64zkil8bmk00i16vvpwz81i63e0yfwukgy4 == 
\z\a\v\7\h\7\a\z\a\o\v\8\m\z\n\l\z\2\i\h\y\v\b\j\3\9\v\e\w\a\g\7\q\i\8\q\b\q\j\p\e\f\7\w\r\o\1\a\4\2\r\j\5\f\q\8\k\c\1\7\g\x\f\f\t\o\o\u\t\e\h\l\n\j\q\u\d\d\p\j\8\u\w\2\3\e\h\h\k\y\x\z\9\y\b\j\m\r\i\5\v\2\y\e\f\x\v\m\t\2\s\i\3\u\g\3\0\q\3\z\j\s\6\f\8\x\r\l\4\b\x\z\a\2\r\7\7\y\i\p\z\u\f\w\a\c\4\7\j\x\7\5\5\5\2\h\d\r\p\8\r\m\z\a\d\j\d\o\6\g\8\h\g\6\s\h\l\4\2\7\6\p\1\8\i\o\p\y\o\z\c\v\l\e\g\4\t\r\m\e\v\6\c\n\c\1\3\u\0\r\n\l\n\6\3\e\c\9\y\p\0\8\y\w\k\1\w\0\g\8\e\2\3\q\j\1\0\5\r\i\n\c\s\l\f\t\8\t\b\a\q\u\8\k\m\q\b\0\w\l\7\z\i\v\0\y\e\l\e\w\a\f\x\x\v\j\l\r\b\l\h\o\3\g\h\v\x\l\9\e\0\0\7\s\m\x\0\x\0\5\z\a\q\g\n\p\s\4\r\m\z\i\l\8\w\3\x\1\w\h\6\z\v\n\1\j\7\j\z\a\t\5\u\o\o\k\k\z\a\j\k\p\c\e\d\o\6\m\p\o\g\8\6\9\7\t\5\9\6\i\x\d\i\x\y\b\4\t\k\e\n\l\n\v\8\e\c\r\q\k\t\j\2\d\o\h\d\k\x\w\d\v\2\p\g\t\b\z\2\n\r\f\c\t\7\d\3\c\s\3\h\f\b\k\k\5\q\l\d\z\z\1\4\9\w\5\t\9\a\z\g\9\7\o\6\s\t\k\r\3\y\6\4\m\b\s\5\i\7\7\u\g\v\c\e\v\u\v\l\e\e\9\i\w\y\b\g\t\p\3\y\o\f\e\e\8\j\6\4\z\k\i\l\8\b\m\k\0\0\i\1\6\v\v\p\w\z\8\1\i\6\3\e\0\y\f\w\u\k\g\y\4 ]] 00:08:57.927 19:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:57.927 19:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:57.927 19:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:57.927 19:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:57.927 19:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:57.927 19:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:58.185 [2024-12-05 19:28:51.395871] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:58.185 [2024-12-05 19:28:51.395977] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60566 ] 00:08:58.185 [2024-12-05 19:28:51.545967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.185 [2024-12-05 19:28:51.600864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.444 [2024-12-05 19:28:51.656886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.444  [2024-12-05T19:28:52.139Z] Copying: 512/512 [B] (average 500 kBps) 00:08:58.702 00:08:58.702 19:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ af9un92i99k0x5msxkzbf6mya2gn3flyioc18t6zf4uq8xwf8n0fwzy5j87ob3upmr1xuvog0uzuov0v8qfoekk143gu15aacagkgapc4fy9burotrzt4lovx4prh2r7bfjo1kzhb0it3tmun4krvdq2usizr7243867duyxq6ozpcbewlqrfwev3xsmek4nzlxjlze4stjq24ic15s2ms11pvxb43rnv914jarhi2vund14tkkj1zd748p30p6ks19rcl1hbfugg3hcwm0a79p6x86jx1etooq8d96sl2w6o7qai49oadlxy1778dr5s3154lj29a45b7c4fp5ejs5712gfd8m7tr8uqg2rd64al9ffj6de55wliu0ts3sbd5axqsyli5xutf3pa4v3omfpwcvj1pbyncmu7g07v0gp58ix4nllqg90cnnr44zemao7uo3yl9kp4qndsuliltfpu0vnuv2oqpyq1xvc45lwptvz00mhclos5yi7yrur == \a\f\9\u\n\9\2\i\9\9\k\0\x\5\m\s\x\k\z\b\f\6\m\y\a\2\g\n\3\f\l\y\i\o\c\1\8\t\6\z\f\4\u\q\8\x\w\f\8\n\0\f\w\z\y\5\j\8\7\o\b\3\u\p\m\r\1\x\u\v\o\g\0\u\z\u\o\v\0\v\8\q\f\o\e\k\k\1\4\3\g\u\1\5\a\a\c\a\g\k\g\a\p\c\4\f\y\9\b\u\r\o\t\r\z\t\4\l\o\v\x\4\p\r\h\2\r\7\b\f\j\o\1\k\z\h\b\0\i\t\3\t\m\u\n\4\k\r\v\d\q\2\u\s\i\z\r\7\2\4\3\8\6\7\d\u\y\x\q\6\o\z\p\c\b\e\w\l\q\r\f\w\e\v\3\x\s\m\e\k\4\n\z\l\x\j\l\z\e\4\s\t\j\q\2\4\i\c\1\5\s\2\m\s\1\1\p\v\x\b\4\3\r\n\v\9\1\4\j\a\r\h\i\2\v\u\n\d\1\4\t\k\k\j\1\z\d\7\4\8\p\3\0\p\6\k\s\1\9\r\c\l\1\h\b\f\u\g\g\3\h\c\w\m\0\a\7\9\p\6\x\8\6\j\x\1\e\t\o\o\q\8\d\9\6\s\l\2\w\6\o\7\q\a\i\4\9\o\a\d\l\x\y\1\7\7\8\d\r\5\s\3\1\5\4\l\j\2\9\a\4\5\b\7\c\4\f\p\5\e\j\s\5\7\1\2\g\f\d\8\m\7\t\r\8\u\q\g\2\r\d\6\4\a\l\9\f\f\j\6\d\e\5\5\w\l\i\u\0\t\s\3\s\b\d\5\a\x\q\s\y\l\i\5\x\u\t\f\3\p\a\4\v\3\o\m\f\p\w\c\v\j\1\p\b\y\n\c\m\u\7\g\0\7\v\0\g\p\5\8\i\x\4\n\l\l\q\g\9\0\c\n\n\r\4\4\z\e\m\a\o\7\u\o\3\y\l\9\k\p\4\q\n\d\s\u\l\i\l\t\f\p\u\0\v\n\u\v\2\o\q\p\y\q\1\x\v\c\4\5\l\w\p\t\v\z\0\0\m\h\c\l\o\s\5\y\i\7\y\r\u\r ]] 00:08:58.702 19:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:58.702 19:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:58.702 [2024-12-05 19:28:51.978170] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:58.702 [2024-12-05 19:28:51.978263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60574 ] 00:08:58.702 [2024-12-05 19:28:52.124881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.961 [2024-12-05 19:28:52.183592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.961 [2024-12-05 19:28:52.240985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.961  [2024-12-05T19:28:52.657Z] Copying: 512/512 [B] (average 500 kBps) 00:08:59.220 00:08:59.220 19:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ af9un92i99k0x5msxkzbf6mya2gn3flyioc18t6zf4uq8xwf8n0fwzy5j87ob3upmr1xuvog0uzuov0v8qfoekk143gu15aacagkgapc4fy9burotrzt4lovx4prh2r7bfjo1kzhb0it3tmun4krvdq2usizr7243867duyxq6ozpcbewlqrfwev3xsmek4nzlxjlze4stjq24ic15s2ms11pvxb43rnv914jarhi2vund14tkkj1zd748p30p6ks19rcl1hbfugg3hcwm0a79p6x86jx1etooq8d96sl2w6o7qai49oadlxy1778dr5s3154lj29a45b7c4fp5ejs5712gfd8m7tr8uqg2rd64al9ffj6de55wliu0ts3sbd5axqsyli5xutf3pa4v3omfpwcvj1pbyncmu7g07v0gp58ix4nllqg90cnnr44zemao7uo3yl9kp4qndsuliltfpu0vnuv2oqpyq1xvc45lwptvz00mhclos5yi7yrur == \a\f\9\u\n\9\2\i\9\9\k\0\x\5\m\s\x\k\z\b\f\6\m\y\a\2\g\n\3\f\l\y\i\o\c\1\8\t\6\z\f\4\u\q\8\x\w\f\8\n\0\f\w\z\y\5\j\8\7\o\b\3\u\p\m\r\1\x\u\v\o\g\0\u\z\u\o\v\0\v\8\q\f\o\e\k\k\1\4\3\g\u\1\5\a\a\c\a\g\k\g\a\p\c\4\f\y\9\b\u\r\o\t\r\z\t\4\l\o\v\x\4\p\r\h\2\r\7\b\f\j\o\1\k\z\h\b\0\i\t\3\t\m\u\n\4\k\r\v\d\q\2\u\s\i\z\r\7\2\4\3\8\6\7\d\u\y\x\q\6\o\z\p\c\b\e\w\l\q\r\f\w\e\v\3\x\s\m\e\k\4\n\z\l\x\j\l\z\e\4\s\t\j\q\2\4\i\c\1\5\s\2\m\s\1\1\p\v\x\b\4\3\r\n\v\9\1\4\j\a\r\h\i\2\v\u\n\d\1\4\t\k\k\j\1\z\d\7\4\8\p\3\0\p\6\k\s\1\9\r\c\l\1\h\b\f\u\g\g\3\h\c\w\m\0\a\7\9\p\6\x\8\6\j\x\1\e\t\o\o\q\8\d\9\6\s\l\2\w\6\o\7\q\a\i\4\9\o\a\d\l\x\y\1\7\7\8\d\r\5\s\3\1\5\4\l\j\2\9\a\4\5\b\7\c\4\f\p\5\e\j\s\5\7\1\2\g\f\d\8\m\7\t\r\8\u\q\g\2\r\d\6\4\a\l\9\f\f\j\6\d\e\5\5\w\l\i\u\0\t\s\3\s\b\d\5\a\x\q\s\y\l\i\5\x\u\t\f\3\p\a\4\v\3\o\m\f\p\w\c\v\j\1\p\b\y\n\c\m\u\7\g\0\7\v\0\g\p\5\8\i\x\4\n\l\l\q\g\9\0\c\n\n\r\4\4\z\e\m\a\o\7\u\o\3\y\l\9\k\p\4\q\n\d\s\u\l\i\l\t\f\p\u\0\v\n\u\v\2\o\q\p\y\q\1\x\v\c\4\5\l\w\p\t\v\z\0\0\m\h\c\l\o\s\5\y\i\7\y\r\u\r ]] 00:08:59.220 19:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:59.220 19:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:59.220 [2024-12-05 19:28:52.562753] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:59.220 [2024-12-05 19:28:52.563050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60581 ] 00:08:59.479 [2024-12-05 19:28:52.713363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.479 [2024-12-05 19:28:52.768422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.479 [2024-12-05 19:28:52.825142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.479  [2024-12-05T19:28:53.175Z] Copying: 512/512 [B] (average 166 kBps) 00:08:59.738 00:08:59.738 19:28:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ af9un92i99k0x5msxkzbf6mya2gn3flyioc18t6zf4uq8xwf8n0fwzy5j87ob3upmr1xuvog0uzuov0v8qfoekk143gu15aacagkgapc4fy9burotrzt4lovx4prh2r7bfjo1kzhb0it3tmun4krvdq2usizr7243867duyxq6ozpcbewlqrfwev3xsmek4nzlxjlze4stjq24ic15s2ms11pvxb43rnv914jarhi2vund14tkkj1zd748p30p6ks19rcl1hbfugg3hcwm0a79p6x86jx1etooq8d96sl2w6o7qai49oadlxy1778dr5s3154lj29a45b7c4fp5ejs5712gfd8m7tr8uqg2rd64al9ffj6de55wliu0ts3sbd5axqsyli5xutf3pa4v3omfpwcvj1pbyncmu7g07v0gp58ix4nllqg90cnnr44zemao7uo3yl9kp4qndsuliltfpu0vnuv2oqpyq1xvc45lwptvz00mhclos5yi7yrur == \a\f\9\u\n\9\2\i\9\9\k\0\x\5\m\s\x\k\z\b\f\6\m\y\a\2\g\n\3\f\l\y\i\o\c\1\8\t\6\z\f\4\u\q\8\x\w\f\8\n\0\f\w\z\y\5\j\8\7\o\b\3\u\p\m\r\1\x\u\v\o\g\0\u\z\u\o\v\0\v\8\q\f\o\e\k\k\1\4\3\g\u\1\5\a\a\c\a\g\k\g\a\p\c\4\f\y\9\b\u\r\o\t\r\z\t\4\l\o\v\x\4\p\r\h\2\r\7\b\f\j\o\1\k\z\h\b\0\i\t\3\t\m\u\n\4\k\r\v\d\q\2\u\s\i\z\r\7\2\4\3\8\6\7\d\u\y\x\q\6\o\z\p\c\b\e\w\l\q\r\f\w\e\v\3\x\s\m\e\k\4\n\z\l\x\j\l\z\e\4\s\t\j\q\2\4\i\c\1\5\s\2\m\s\1\1\p\v\x\b\4\3\r\n\v\9\1\4\j\a\r\h\i\2\v\u\n\d\1\4\t\k\k\j\1\z\d\7\4\8\p\3\0\p\6\k\s\1\9\r\c\l\1\h\b\f\u\g\g\3\h\c\w\m\0\a\7\9\p\6\x\8\6\j\x\1\e\t\o\o\q\8\d\9\6\s\l\2\w\6\o\7\q\a\i\4\9\o\a\d\l\x\y\1\7\7\8\d\r\5\s\3\1\5\4\l\j\2\9\a\4\5\b\7\c\4\f\p\5\e\j\s\5\7\1\2\g\f\d\8\m\7\t\r\8\u\q\g\2\r\d\6\4\a\l\9\f\f\j\6\d\e\5\5\w\l\i\u\0\t\s\3\s\b\d\5\a\x\q\s\y\l\i\5\x\u\t\f\3\p\a\4\v\3\o\m\f\p\w\c\v\j\1\p\b\y\n\c\m\u\7\g\0\7\v\0\g\p\5\8\i\x\4\n\l\l\q\g\9\0\c\n\n\r\4\4\z\e\m\a\o\7\u\o\3\y\l\9\k\p\4\q\n\d\s\u\l\i\l\t\f\p\u\0\v\n\u\v\2\o\q\p\y\q\1\x\v\c\4\5\l\w\p\t\v\z\0\0\m\h\c\l\o\s\5\y\i\7\y\r\u\r ]] 00:08:59.738 19:28:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:59.738 19:28:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:59.738 [2024-12-05 19:28:53.141086] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:59.738 [2024-12-05 19:28:53.141213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60589 ] 00:08:59.997 [2024-12-05 19:28:53.292581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.997 [2024-12-05 19:28:53.350294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.997 [2024-12-05 19:28:53.407181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:00.256  [2024-12-05T19:28:53.693Z] Copying: 512/512 [B] (average 250 kBps) 00:09:00.256 00:09:00.256 ************************************ 00:09:00.256 END TEST dd_flags_misc_forced_aio 00:09:00.256 ************************************ 00:09:00.256 19:28:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ af9un92i99k0x5msxkzbf6mya2gn3flyioc18t6zf4uq8xwf8n0fwzy5j87ob3upmr1xuvog0uzuov0v8qfoekk143gu15aacagkgapc4fy9burotrzt4lovx4prh2r7bfjo1kzhb0it3tmun4krvdq2usizr7243867duyxq6ozpcbewlqrfwev3xsmek4nzlxjlze4stjq24ic15s2ms11pvxb43rnv914jarhi2vund14tkkj1zd748p30p6ks19rcl1hbfugg3hcwm0a79p6x86jx1etooq8d96sl2w6o7qai49oadlxy1778dr5s3154lj29a45b7c4fp5ejs5712gfd8m7tr8uqg2rd64al9ffj6de55wliu0ts3sbd5axqsyli5xutf3pa4v3omfpwcvj1pbyncmu7g07v0gp58ix4nllqg90cnnr44zemao7uo3yl9kp4qndsuliltfpu0vnuv2oqpyq1xvc45lwptvz00mhclos5yi7yrur == \a\f\9\u\n\9\2\i\9\9\k\0\x\5\m\s\x\k\z\b\f\6\m\y\a\2\g\n\3\f\l\y\i\o\c\1\8\t\6\z\f\4\u\q\8\x\w\f\8\n\0\f\w\z\y\5\j\8\7\o\b\3\u\p\m\r\1\x\u\v\o\g\0\u\z\u\o\v\0\v\8\q\f\o\e\k\k\1\4\3\g\u\1\5\a\a\c\a\g\k\g\a\p\c\4\f\y\9\b\u\r\o\t\r\z\t\4\l\o\v\x\4\p\r\h\2\r\7\b\f\j\o\1\k\z\h\b\0\i\t\3\t\m\u\n\4\k\r\v\d\q\2\u\s\i\z\r\7\2\4\3\8\6\7\d\u\y\x\q\6\o\z\p\c\b\e\w\l\q\r\f\w\e\v\3\x\s\m\e\k\4\n\z\l\x\j\l\z\e\4\s\t\j\q\2\4\i\c\1\5\s\2\m\s\1\1\p\v\x\b\4\3\r\n\v\9\1\4\j\a\r\h\i\2\v\u\n\d\1\4\t\k\k\j\1\z\d\7\4\8\p\3\0\p\6\k\s\1\9\r\c\l\1\h\b\f\u\g\g\3\h\c\w\m\0\a\7\9\p\6\x\8\6\j\x\1\e\t\o\o\q\8\d\9\6\s\l\2\w\6\o\7\q\a\i\4\9\o\a\d\l\x\y\1\7\7\8\d\r\5\s\3\1\5\4\l\j\2\9\a\4\5\b\7\c\4\f\p\5\e\j\s\5\7\1\2\g\f\d\8\m\7\t\r\8\u\q\g\2\r\d\6\4\a\l\9\f\f\j\6\d\e\5\5\w\l\i\u\0\t\s\3\s\b\d\5\a\x\q\s\y\l\i\5\x\u\t\f\3\p\a\4\v\3\o\m\f\p\w\c\v\j\1\p\b\y\n\c\m\u\7\g\0\7\v\0\g\p\5\8\i\x\4\n\l\l\q\g\9\0\c\n\n\r\4\4\z\e\m\a\o\7\u\o\3\y\l\9\k\p\4\q\n\d\s\u\l\i\l\t\f\p\u\0\v\n\u\v\2\o\q\p\y\q\1\x\v\c\4\5\l\w\p\t\v\z\0\0\m\h\c\l\o\s\5\y\i\7\y\r\u\r ]] 00:09:00.256 00:09:00.256 real 0m4.764s 00:09:00.256 user 0m2.530s 00:09:00.256 sys 0m1.240s 00:09:00.256 19:28:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.256 19:28:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:00.516 19:28:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:09:00.516 19:28:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:00.516 19:28:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:00.516 ************************************ 00:09:00.516 END TEST spdk_dd_posix 00:09:00.516 ************************************ 00:09:00.516 00:09:00.516 real 0m20.875s 00:09:00.516 user 0m9.982s 00:09:00.516 sys 0m6.835s 00:09:00.516 19:28:53 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.516 19:28:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:00.516 19:28:53 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:00.516 19:28:53 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.516 19:28:53 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.516 19:28:53 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:00.516 ************************************ 00:09:00.516 START TEST spdk_dd_malloc 00:09:00.516 ************************************ 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:00.516 * Looking for test storage... 00:09:00.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.516 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:09:00.776 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:09:00.776 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.776 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:09:00.776 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.776 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:09:00.776 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:09:00.776 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.776 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:09:00.776 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.776 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.776 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.776 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:09:00.776 19:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.776 19:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:00.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.776 --rc genhtml_branch_coverage=1 00:09:00.776 --rc genhtml_function_coverage=1 00:09:00.776 --rc genhtml_legend=1 00:09:00.776 --rc geninfo_all_blocks=1 00:09:00.776 --rc geninfo_unexecuted_blocks=1 00:09:00.776 00:09:00.776 ' 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:00.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.777 --rc genhtml_branch_coverage=1 00:09:00.777 --rc genhtml_function_coverage=1 00:09:00.777 --rc genhtml_legend=1 00:09:00.777 --rc geninfo_all_blocks=1 00:09:00.777 --rc geninfo_unexecuted_blocks=1 00:09:00.777 00:09:00.777 ' 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:00.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.777 --rc genhtml_branch_coverage=1 00:09:00.777 --rc genhtml_function_coverage=1 00:09:00.777 --rc genhtml_legend=1 00:09:00.777 --rc geninfo_all_blocks=1 00:09:00.777 --rc geninfo_unexecuted_blocks=1 00:09:00.777 00:09:00.777 ' 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:00.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.777 --rc genhtml_branch_coverage=1 00:09:00.777 --rc genhtml_function_coverage=1 00:09:00.777 --rc genhtml_legend=1 00:09:00.777 --rc geninfo_all_blocks=1 00:09:00.777 --rc geninfo_unexecuted_blocks=1 00:09:00.777 00:09:00.777 ' 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.777 19:28:53 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:00.777 ************************************ 00:09:00.777 START TEST dd_malloc_copy 00:09:00.777 ************************************ 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:00.777 19:28:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:00.777 [2024-12-05 19:28:54.041161] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:09:00.777 [2024-12-05 19:28:54.041434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60671 ] 00:09:00.777 { 00:09:00.777 "subsystems": [ 00:09:00.777 { 00:09:00.777 "subsystem": "bdev", 00:09:00.777 "config": [ 00:09:00.777 { 00:09:00.777 "params": { 00:09:00.777 "block_size": 512, 00:09:00.777 "num_blocks": 1048576, 00:09:00.777 "name": "malloc0" 00:09:00.777 }, 00:09:00.777 "method": "bdev_malloc_create" 00:09:00.777 }, 00:09:00.777 { 00:09:00.777 "params": { 00:09:00.777 "block_size": 512, 00:09:00.777 "num_blocks": 1048576, 00:09:00.777 "name": "malloc1" 00:09:00.777 }, 00:09:00.777 "method": "bdev_malloc_create" 00:09:00.777 }, 00:09:00.777 { 00:09:00.777 "method": "bdev_wait_for_examine" 00:09:00.777 } 00:09:00.777 ] 00:09:00.777 } 00:09:00.777 ] 00:09:00.777 } 00:09:00.777 [2024-12-05 19:28:54.194822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.036 [2024-12-05 19:28:54.255973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.036 [2024-12-05 19:28:54.315498] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:02.413  [2024-12-05T19:28:56.786Z] Copying: 210/512 [MB] (210 MBps) [2024-12-05T19:28:57.354Z] Copying: 427/512 [MB] (216 MBps) [2024-12-05T19:28:57.922Z] Copying: 512/512 [MB] (average 213 MBps) 00:09:04.485 00:09:04.485 19:28:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:09:04.485 19:28:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:09:04.485 19:28:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:04.485 19:28:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:04.485 { 00:09:04.485 "subsystems": [ 00:09:04.485 { 00:09:04.485 "subsystem": "bdev", 00:09:04.485 "config": [ 00:09:04.485 { 00:09:04.485 "params": { 00:09:04.485 "block_size": 512, 00:09:04.485 "num_blocks": 1048576, 00:09:04.485 "name": "malloc0" 00:09:04.485 }, 00:09:04.485 "method": "bdev_malloc_create" 00:09:04.485 }, 00:09:04.485 { 00:09:04.485 "params": { 00:09:04.485 "block_size": 512, 00:09:04.485 "num_blocks": 1048576, 00:09:04.485 "name": "malloc1" 00:09:04.485 }, 00:09:04.485 "method": 
"bdev_malloc_create" 00:09:04.485 }, 00:09:04.485 { 00:09:04.485 "method": "bdev_wait_for_examine" 00:09:04.485 } 00:09:04.485 ] 00:09:04.485 } 00:09:04.485 ] 00:09:04.485 } 00:09:04.485 [2024-12-05 19:28:57.705404] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:09:04.485 [2024-12-05 19:28:57.705719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60718 ] 00:09:04.485 [2024-12-05 19:28:57.850881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.485 [2024-12-05 19:28:57.901339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.743 [2024-12-05 19:28:57.954744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:06.120  [2024-12-05T19:29:00.516Z] Copying: 218/512 [MB] (218 MBps) [2024-12-05T19:29:00.779Z] Copying: 440/512 [MB] (222 MBps) [2024-12-05T19:29:01.347Z] Copying: 512/512 [MB] (average 220 MBps) 00:09:07.910 00:09:07.910 ************************************ 00:09:07.910 END TEST dd_malloc_copy 00:09:07.910 ************************************ 00:09:07.910 00:09:07.910 real 0m7.200s 00:09:07.910 user 0m6.166s 00:09:07.910 sys 0m0.861s 00:09:07.910 19:29:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.910 19:29:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:07.910 ************************************ 00:09:07.910 END TEST spdk_dd_malloc 00:09:07.910 ************************************ 00:09:07.910 00:09:07.910 real 0m7.457s 00:09:07.910 user 0m6.321s 00:09:07.910 sys 0m0.965s 00:09:07.910 19:29:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.910 19:29:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:07.910 19:29:01 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:07.910 19:29:01 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:07.910 19:29:01 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.910 19:29:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:07.910 ************************************ 00:09:07.910 START TEST spdk_dd_bdev_to_bdev 00:09:07.910 ************************************ 00:09:07.910 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:08.169 * Looking for test storage... 
00:09:08.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:08.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.169 --rc genhtml_branch_coverage=1 00:09:08.169 --rc genhtml_function_coverage=1 00:09:08.169 --rc genhtml_legend=1 00:09:08.169 --rc geninfo_all_blocks=1 00:09:08.169 --rc geninfo_unexecuted_blocks=1 00:09:08.169 00:09:08.169 ' 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:08.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.169 --rc genhtml_branch_coverage=1 00:09:08.169 --rc genhtml_function_coverage=1 00:09:08.169 --rc genhtml_legend=1 00:09:08.169 --rc geninfo_all_blocks=1 00:09:08.169 --rc geninfo_unexecuted_blocks=1 00:09:08.169 00:09:08.169 ' 00:09:08.169 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:08.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.170 --rc genhtml_branch_coverage=1 00:09:08.170 --rc genhtml_function_coverage=1 00:09:08.170 --rc genhtml_legend=1 00:09:08.170 --rc geninfo_all_blocks=1 00:09:08.170 --rc geninfo_unexecuted_blocks=1 00:09:08.170 00:09:08.170 ' 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:08.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.170 --rc genhtml_branch_coverage=1 00:09:08.170 --rc genhtml_function_coverage=1 00:09:08.170 --rc genhtml_legend=1 00:09:08.170 --rc geninfo_all_blocks=1 00:09:08.170 --rc geninfo_unexecuted_blocks=1 00:09:08.170 00:09:08.170 ' 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.170 19:29:01 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:08.170 ************************************ 00:09:08.170 START TEST dd_inflate_file 00:09:08.170 ************************************ 00:09:08.170 19:29:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:08.170 [2024-12-05 19:29:01.545357] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:08.170 [2024-12-05 19:29:01.545731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60831 ] 00:09:08.429 [2024-12-05 19:29:01.694038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.429 [2024-12-05 19:29:01.758584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.429 [2024-12-05 19:29:01.812292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:08.688  [2024-12-05T19:29:02.125Z] Copying: 64/64 [MB] (average 1560 MBps) 00:09:08.688 00:09:08.688 00:09:08.688 real 0m0.584s 00:09:08.688 user 0m0.340s 00:09:08.688 sys 0m0.301s 00:09:08.688 19:29:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.688 19:29:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:09:08.688 ************************************ 00:09:08.688 END TEST dd_inflate_file 00:09:08.688 ************************************ 00:09:08.688 19:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:09:08.688 19:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:09:08.688 19:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:08.688 19:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:09:08.688 19:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:08.688 19:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:08.688 19:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.688 19:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:08.688 19:29:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:08.946 ************************************ 00:09:08.946 START TEST dd_copy_to_out_bdev 00:09:08.946 ************************************ 00:09:08.946 19:29:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:08.946 [2024-12-05 19:29:02.178863] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:08.946 [2024-12-05 19:29:02.179468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60870 ] 00:09:08.946 { 00:09:08.946 "subsystems": [ 00:09:08.946 { 00:09:08.946 "subsystem": "bdev", 00:09:08.946 "config": [ 00:09:08.946 { 00:09:08.946 "params": { 00:09:08.946 "trtype": "pcie", 00:09:08.946 "traddr": "0000:00:10.0", 00:09:08.946 "name": "Nvme0" 00:09:08.946 }, 00:09:08.946 "method": "bdev_nvme_attach_controller" 00:09:08.946 }, 00:09:08.946 { 00:09:08.946 "params": { 00:09:08.946 "trtype": "pcie", 00:09:08.946 "traddr": "0000:00:11.0", 00:09:08.946 "name": "Nvme1" 00:09:08.946 }, 00:09:08.946 "method": "bdev_nvme_attach_controller" 00:09:08.946 }, 00:09:08.946 { 00:09:08.946 "method": "bdev_wait_for_examine" 00:09:08.946 } 00:09:08.946 ] 00:09:08.946 } 00:09:08.946 ] 00:09:08.947 } 00:09:08.947 [2024-12-05 19:29:02.324276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.947 [2024-12-05 19:29:02.382327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.205 [2024-12-05 19:29:02.435556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.581  [2024-12-05T19:29:04.018Z] Copying: 55/64 [MB] (55 MBps) [2024-12-05T19:29:04.277Z] Copying: 64/64 [MB] (average 54 MBps) 00:09:10.840 00:09:10.840 00:09:10.840 real 0m1.902s 00:09:10.840 user 0m1.665s 00:09:10.840 sys 0m1.544s 00:09:10.840 ************************************ 00:09:10.840 END TEST dd_copy_to_out_bdev 00:09:10.840 ************************************ 00:09:10.840 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.840 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:10.840 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:09:10.840 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:09:10.840 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.840 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.840 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:10.840 ************************************ 00:09:10.840 START TEST dd_offset_magic 00:09:10.840 ************************************ 00:09:10.840 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:09:10.840 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:09:10.840 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:09:10.840 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:09:10.840 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:10.840 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:09:10.840 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 
00:09:10.840 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:10.840 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:10.840 [2024-12-05 19:29:04.141740] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:09:10.840 [2024-12-05 19:29:04.141833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60915 ] 00:09:10.840 { 00:09:10.840 "subsystems": [ 00:09:10.840 { 00:09:10.840 "subsystem": "bdev", 00:09:10.840 "config": [ 00:09:10.840 { 00:09:10.840 "params": { 00:09:10.840 "trtype": "pcie", 00:09:10.840 "traddr": "0000:00:10.0", 00:09:10.840 "name": "Nvme0" 00:09:10.840 }, 00:09:10.840 "method": "bdev_nvme_attach_controller" 00:09:10.840 }, 00:09:10.840 { 00:09:10.840 "params": { 00:09:10.840 "trtype": "pcie", 00:09:10.840 "traddr": "0000:00:11.0", 00:09:10.840 "name": "Nvme1" 00:09:10.840 }, 00:09:10.840 "method": "bdev_nvme_attach_controller" 00:09:10.840 }, 00:09:10.840 { 00:09:10.840 "method": "bdev_wait_for_examine" 00:09:10.840 } 00:09:10.840 ] 00:09:10.840 } 00:09:10.840 ] 00:09:10.840 } 00:09:11.099 [2024-12-05 19:29:04.280569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.099 [2024-12-05 19:29:04.339274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.099 [2024-12-05 19:29:04.394905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:11.361  [2024-12-05T19:29:05.057Z] Copying: 65/65 [MB] (average 822 MBps) 00:09:11.621 00:09:11.621 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:09:11.621 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:11.621 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:11.621 19:29:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:11.621 { 00:09:11.621 "subsystems": [ 00:09:11.621 { 00:09:11.621 "subsystem": "bdev", 00:09:11.621 "config": [ 00:09:11.621 { 00:09:11.621 "params": { 00:09:11.621 "trtype": "pcie", 00:09:11.621 "traddr": "0000:00:10.0", 00:09:11.621 "name": "Nvme0" 00:09:11.621 }, 00:09:11.621 "method": "bdev_nvme_attach_controller" 00:09:11.621 }, 00:09:11.621 { 00:09:11.621 "params": { 00:09:11.621 "trtype": "pcie", 00:09:11.621 "traddr": "0000:00:11.0", 00:09:11.621 "name": "Nvme1" 00:09:11.621 }, 00:09:11.621 "method": "bdev_nvme_attach_controller" 00:09:11.621 }, 00:09:11.621 { 00:09:11.621 "method": "bdev_wait_for_examine" 00:09:11.621 } 00:09:11.621 ] 00:09:11.621 } 00:09:11.621 ] 00:09:11.621 } 00:09:11.621 [2024-12-05 19:29:04.944902] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:11.621 [2024-12-05 19:29:04.945016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60928 ] 00:09:11.880 [2024-12-05 19:29:05.091685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.880 [2024-12-05 19:29:05.144201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.880 [2024-12-05 19:29:05.201162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.139  [2024-12-05T19:29:05.576Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:12.139 00:09:12.139 19:29:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:12.139 19:29:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:12.139 19:29:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:12.139 19:29:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:09:12.139 19:29:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:12.139 19:29:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:12.139 19:29:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:12.398 [2024-12-05 19:29:05.629513] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:12.398 [2024-12-05 19:29:05.629625] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60946 ] 00:09:12.398 { 00:09:12.398 "subsystems": [ 00:09:12.398 { 00:09:12.398 "subsystem": "bdev", 00:09:12.398 "config": [ 00:09:12.398 { 00:09:12.398 "params": { 00:09:12.398 "trtype": "pcie", 00:09:12.398 "traddr": "0000:00:10.0", 00:09:12.398 "name": "Nvme0" 00:09:12.398 }, 00:09:12.398 "method": "bdev_nvme_attach_controller" 00:09:12.398 }, 00:09:12.398 { 00:09:12.398 "params": { 00:09:12.398 "trtype": "pcie", 00:09:12.398 "traddr": "0000:00:11.0", 00:09:12.398 "name": "Nvme1" 00:09:12.398 }, 00:09:12.398 "method": "bdev_nvme_attach_controller" 00:09:12.398 }, 00:09:12.398 { 00:09:12.398 "method": "bdev_wait_for_examine" 00:09:12.398 } 00:09:12.398 ] 00:09:12.398 } 00:09:12.398 ] 00:09:12.398 } 00:09:12.398 [2024-12-05 19:29:05.777456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.664 [2024-12-05 19:29:05.838023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.664 [2024-12-05 19:29:05.894455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.922  [2024-12-05T19:29:06.618Z] Copying: 65/65 [MB] (average 970 MBps) 00:09:13.181 00:09:13.181 19:29:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:09:13.181 19:29:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:13.181 19:29:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:13.181 19:29:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:13.181 [2024-12-05 19:29:06.445799] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:13.181 [2024-12-05 19:29:06.445947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60966 ] 00:09:13.181 { 00:09:13.181 "subsystems": [ 00:09:13.181 { 00:09:13.181 "subsystem": "bdev", 00:09:13.181 "config": [ 00:09:13.181 { 00:09:13.181 "params": { 00:09:13.181 "trtype": "pcie", 00:09:13.181 "traddr": "0000:00:10.0", 00:09:13.181 "name": "Nvme0" 00:09:13.181 }, 00:09:13.181 "method": "bdev_nvme_attach_controller" 00:09:13.181 }, 00:09:13.181 { 00:09:13.181 "params": { 00:09:13.181 "trtype": "pcie", 00:09:13.181 "traddr": "0000:00:11.0", 00:09:13.181 "name": "Nvme1" 00:09:13.181 }, 00:09:13.181 "method": "bdev_nvme_attach_controller" 00:09:13.181 }, 00:09:13.181 { 00:09:13.181 "method": "bdev_wait_for_examine" 00:09:13.181 } 00:09:13.181 ] 00:09:13.181 } 00:09:13.181 ] 00:09:13.181 } 00:09:13.181 [2024-12-05 19:29:06.593016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.440 [2024-12-05 19:29:06.650157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.440 [2024-12-05 19:29:06.702823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.697  [2024-12-05T19:29:07.134Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:09:13.697 00:09:13.697 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:13.697 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:13.697 00:09:13.697 real 0m2.990s 00:09:13.697 user 0m2.184s 00:09:13.697 sys 0m0.911s 00:09:13.697 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.697 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:13.697 ************************************ 00:09:13.697 END TEST dd_offset_magic 00:09:13.697 ************************************ 00:09:13.697 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:09:13.697 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:09:13.697 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:13.697 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:13.697 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:13.697 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:13.697 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:13.697 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:09:13.697 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:13.697 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:13.697 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:13.956 [2024-12-05 19:29:07.183733] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:13.956 [2024-12-05 19:29:07.184145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61003 ] 00:09:13.956 { 00:09:13.956 "subsystems": [ 00:09:13.956 { 00:09:13.956 "subsystem": "bdev", 00:09:13.956 "config": [ 00:09:13.956 { 00:09:13.956 "params": { 00:09:13.956 "trtype": "pcie", 00:09:13.956 "traddr": "0000:00:10.0", 00:09:13.956 "name": "Nvme0" 00:09:13.956 }, 00:09:13.956 "method": "bdev_nvme_attach_controller" 00:09:13.956 }, 00:09:13.956 { 00:09:13.956 "params": { 00:09:13.956 "trtype": "pcie", 00:09:13.956 "traddr": "0000:00:11.0", 00:09:13.956 "name": "Nvme1" 00:09:13.956 }, 00:09:13.956 "method": "bdev_nvme_attach_controller" 00:09:13.956 }, 00:09:13.956 { 00:09:13.956 "method": "bdev_wait_for_examine" 00:09:13.956 } 00:09:13.956 ] 00:09:13.956 } 00:09:13.956 ] 00:09:13.956 } 00:09:13.956 [2024-12-05 19:29:07.329897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.956 [2024-12-05 19:29:07.380229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.215 [2024-12-05 19:29:07.435069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:14.215  [2024-12-05T19:29:07.910Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:09:14.473 00:09:14.473 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:09:14.473 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:09:14.473 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:14.473 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:14.473 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:14.473 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:14.473 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:14.473 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:14.473 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:14.473 19:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:09:14.473 { 00:09:14.473 "subsystems": [ 00:09:14.473 { 00:09:14.473 "subsystem": "bdev", 00:09:14.473 "config": [ 00:09:14.473 { 00:09:14.473 "params": { 00:09:14.473 "trtype": "pcie", 00:09:14.473 "traddr": "0000:00:10.0", 00:09:14.473 "name": "Nvme0" 00:09:14.473 }, 00:09:14.473 "method": "bdev_nvme_attach_controller" 00:09:14.473 }, 00:09:14.473 { 00:09:14.473 "params": { 00:09:14.473 "trtype": "pcie", 00:09:14.473 "traddr": "0000:00:11.0", 00:09:14.473 "name": "Nvme1" 00:09:14.473 }, 00:09:14.473 "method": "bdev_nvme_attach_controller" 00:09:14.473 }, 00:09:14.473 { 00:09:14.473 "method": "bdev_wait_for_examine" 00:09:14.473 } 00:09:14.473 ] 00:09:14.473 } 00:09:14.473 ] 00:09:14.473 } 00:09:14.473 [2024-12-05 19:29:07.871967] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:14.473 [2024-12-05 19:29:07.872250] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61013 ] 00:09:14.780 [2024-12-05 19:29:08.018886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.780 [2024-12-05 19:29:08.083989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.780 [2024-12-05 19:29:08.137865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.038  [2024-12-05T19:29:08.738Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:09:15.301 00:09:15.301 19:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:09:15.301 ************************************ 00:09:15.301 END TEST spdk_dd_bdev_to_bdev 00:09:15.301 ************************************ 00:09:15.301 00:09:15.301 real 0m7.258s 00:09:15.301 user 0m5.343s 00:09:15.301 sys 0m3.477s 00:09:15.301 19:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.301 19:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:15.301 19:29:08 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:09:15.301 19:29:08 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:15.301 19:29:08 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.301 19:29:08 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.301 19:29:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:15.301 ************************************ 00:09:15.301 START TEST spdk_dd_uring 00:09:15.301 ************************************ 00:09:15.301 19:29:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:15.301 * Looking for test storage... 
00:09:15.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:15.301 19:29:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:15.301 19:29:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:09:15.301 19:29:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:15.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.560 --rc genhtml_branch_coverage=1 00:09:15.560 --rc genhtml_function_coverage=1 00:09:15.560 --rc genhtml_legend=1 00:09:15.560 --rc geninfo_all_blocks=1 00:09:15.560 --rc geninfo_unexecuted_blocks=1 00:09:15.560 00:09:15.560 ' 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:15.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.560 --rc genhtml_branch_coverage=1 00:09:15.560 --rc genhtml_function_coverage=1 00:09:15.560 --rc genhtml_legend=1 00:09:15.560 --rc geninfo_all_blocks=1 00:09:15.560 --rc geninfo_unexecuted_blocks=1 00:09:15.560 00:09:15.560 ' 00:09:15.560 19:29:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:15.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.560 --rc genhtml_branch_coverage=1 00:09:15.560 --rc genhtml_function_coverage=1 00:09:15.561 --rc genhtml_legend=1 00:09:15.561 --rc geninfo_all_blocks=1 00:09:15.561 --rc geninfo_unexecuted_blocks=1 00:09:15.561 00:09:15.561 ' 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:15.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.561 --rc genhtml_branch_coverage=1 00:09:15.561 --rc genhtml_function_coverage=1 00:09:15.561 --rc genhtml_legend=1 00:09:15.561 --rc geninfo_all_blocks=1 00:09:15.561 --rc geninfo_unexecuted_blocks=1 00:09:15.561 00:09:15.561 ' 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:15.561 ************************************ 00:09:15.561 START TEST dd_uring_copy 00:09:15.561 ************************************ 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:15.561 
19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=mlrs95vteflj3uyakk5rcv50lkkoppmk2kk0l872eqlcuem4ixtmlvnfqs3gf7wpm315amdaeff0nkjuygvcstx7jc5m9ohgtehwyt5nyxl4bkv9n8ri898vke9p439zqyx60lp6l2s6ltnxremndhgjad6bxdxxd013zijj7jzliyreetcchywp35bb18v7jflh9qrjed41g7qsd1tz0gpe6jlh129h7xce7qcw8kmfldgryl31m4yibdyttpzofv788wuegex53f8v9yril14thhwltdjxvnz7l3i2fdxxfggxlytwgm5db79sdaakxm3adcng10waeknc7w6jdmgk153i1h47dtj7s807vpf9u1u11hrdnxtlo8660hulqkb7gpw0em56ifhmow3rnlayhm5atfb9d8ow9juo5sk9j11o35rgk47mhhr0dszmhi9ore0fcx41fzgh1kphsfclb7ckuauxewdyqpq4zd9titafzl0wuudssieg7ylv8qjez23jvdld8qw391ar27s5uqk3o234z86zxwgsln7i6xw3bdl5t3b6sgfc8elb88fnzpz8jdrmsrw2kaaxz027928vgmtkyhlce1ew5shfyuxss4aior65bvkc0e0cux3npz76m73pda5oz5e0bxbewc3760nj8bd68gfjbxh5vrgp72ijux0dqn8jpc4ar4yb8mtww18sqe898hkocjitnvc1m389i07g7x4aiwj3xaumextllkzleizy7jlr4udsq8xkm5u714q2u59i8qkfmqgggaa8zc4ho9czylv4qpn409znu6rij1g9ir9s42n4wp3mna3ilieafr91losjb8clj6pvhjof48a99l9h50tfyhy04sll54qgeffebz8r4qcjwj79flt2eoev0oyeqhiypsoxvoz0kog3jd4u5smfroome1eamoq6yduvjolc636hq5hqyywd8gq038qp05nwiryrbei0it6ctin8zeteir9qdmi2s80eyhij 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
mlrs95vteflj3uyakk5rcv50lkkoppmk2kk0l872eqlcuem4ixtmlvnfqs3gf7wpm315amdaeff0nkjuygvcstx7jc5m9ohgtehwyt5nyxl4bkv9n8ri898vke9p439zqyx60lp6l2s6ltnxremndhgjad6bxdxxd013zijj7jzliyreetcchywp35bb18v7jflh9qrjed41g7qsd1tz0gpe6jlh129h7xce7qcw8kmfldgryl31m4yibdyttpzofv788wuegex53f8v9yril14thhwltdjxvnz7l3i2fdxxfggxlytwgm5db79sdaakxm3adcng10waeknc7w6jdmgk153i1h47dtj7s807vpf9u1u11hrdnxtlo8660hulqkb7gpw0em56ifhmow3rnlayhm5atfb9d8ow9juo5sk9j11o35rgk47mhhr0dszmhi9ore0fcx41fzgh1kphsfclb7ckuauxewdyqpq4zd9titafzl0wuudssieg7ylv8qjez23jvdld8qw391ar27s5uqk3o234z86zxwgsln7i6xw3bdl5t3b6sgfc8elb88fnzpz8jdrmsrw2kaaxz027928vgmtkyhlce1ew5shfyuxss4aior65bvkc0e0cux3npz76m73pda5oz5e0bxbewc3760nj8bd68gfjbxh5vrgp72ijux0dqn8jpc4ar4yb8mtww18sqe898hkocjitnvc1m389i07g7x4aiwj3xaumextllkzleizy7jlr4udsq8xkm5u714q2u59i8qkfmqgggaa8zc4ho9czylv4qpn409znu6rij1g9ir9s42n4wp3mna3ilieafr91losjb8clj6pvhjof48a99l9h50tfyhy04sll54qgeffebz8r4qcjwj79flt2eoev0oyeqhiypsoxvoz0kog3jd4u5smfroome1eamoq6yduvjolc636hq5hqyywd8gq038qp05nwiryrbei0it6ctin8zeteir9qdmi2s80eyhij 00:09:15.561 19:29:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:09:15.561 [2024-12-05 19:29:08.887649] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:09:15.561 [2024-12-05 19:29:08.887969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61091 ] 00:09:15.819 [2024-12-05 19:29:09.039200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.819 [2024-12-05 19:29:09.096675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.819 [2024-12-05 19:29:09.152563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:16.753  [2024-12-05T19:29:10.450Z] Copying: 511/511 [MB] (average 1060 MBps) 00:09:17.013 00:09:17.013 19:29:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:17.013 19:29:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:09:17.013 19:29:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:17.013 19:29:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:17.013 [2024-12-05 19:29:10.295652] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:17.013 [2024-12-05 19:29:10.295759] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61118 ] 00:09:17.013 { 00:09:17.013 "subsystems": [ 00:09:17.013 { 00:09:17.013 "subsystem": "bdev", 00:09:17.013 "config": [ 00:09:17.013 { 00:09:17.013 "params": { 00:09:17.013 "block_size": 512, 00:09:17.013 "num_blocks": 1048576, 00:09:17.013 "name": "malloc0" 00:09:17.013 }, 00:09:17.013 "method": "bdev_malloc_create" 00:09:17.013 }, 00:09:17.013 { 00:09:17.013 "params": { 00:09:17.013 "filename": "/dev/zram1", 00:09:17.013 "name": "uring0" 00:09:17.013 }, 00:09:17.013 "method": "bdev_uring_create" 00:09:17.013 }, 00:09:17.013 { 00:09:17.013 "method": "bdev_wait_for_examine" 00:09:17.013 } 00:09:17.013 ] 00:09:17.013 } 00:09:17.013 ] 00:09:17.013 } 00:09:17.013 [2024-12-05 19:29:10.437056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.272 [2024-12-05 19:29:10.495074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.272 [2024-12-05 19:29:10.548396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.656  [2024-12-05T19:29:13.058Z] Copying: 217/512 [MB] (217 MBps) [2024-12-05T19:29:13.318Z] Copying: 437/512 [MB] (220 MBps) [2024-12-05T19:29:13.578Z] Copying: 512/512 [MB] (average 219 MBps) 00:09:20.141 00:09:20.141 19:29:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:20.141 19:29:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:09:20.141 19:29:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:20.141 19:29:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:20.141 [2024-12-05 19:29:13.530105] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:20.141 [2024-12-05 19:29:13.530483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61162 ] 00:09:20.141 { 00:09:20.141 "subsystems": [ 00:09:20.141 { 00:09:20.141 "subsystem": "bdev", 00:09:20.141 "config": [ 00:09:20.141 { 00:09:20.141 "params": { 00:09:20.141 "block_size": 512, 00:09:20.141 "num_blocks": 1048576, 00:09:20.141 "name": "malloc0" 00:09:20.141 }, 00:09:20.141 "method": "bdev_malloc_create" 00:09:20.141 }, 00:09:20.141 { 00:09:20.141 "params": { 00:09:20.141 "filename": "/dev/zram1", 00:09:20.141 "name": "uring0" 00:09:20.141 }, 00:09:20.141 "method": "bdev_uring_create" 00:09:20.141 }, 00:09:20.141 { 00:09:20.141 "method": "bdev_wait_for_examine" 00:09:20.141 } 00:09:20.141 ] 00:09:20.141 } 00:09:20.141 ] 00:09:20.141 } 00:09:20.400 [2024-12-05 19:29:13.679183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.400 [2024-12-05 19:29:13.730167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.400 [2024-12-05 19:29:13.784857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:21.777  [2024-12-05T19:29:16.150Z] Copying: 180/512 [MB] (180 MBps) [2024-12-05T19:29:17.086Z] Copying: 344/512 [MB] (164 MBps) [2024-12-05T19:29:17.655Z] Copying: 512/512 [MB] (average 173 MBps) 00:09:24.218 00:09:24.218 19:29:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:24.218 19:29:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ mlrs95vteflj3uyakk5rcv50lkkoppmk2kk0l872eqlcuem4ixtmlvnfqs3gf7wpm315amdaeff0nkjuygvcstx7jc5m9ohgtehwyt5nyxl4bkv9n8ri898vke9p439zqyx60lp6l2s6ltnxremndhgjad6bxdxxd013zijj7jzliyreetcchywp35bb18v7jflh9qrjed41g7qsd1tz0gpe6jlh129h7xce7qcw8kmfldgryl31m4yibdyttpzofv788wuegex53f8v9yril14thhwltdjxvnz7l3i2fdxxfggxlytwgm5db79sdaakxm3adcng10waeknc7w6jdmgk153i1h47dtj7s807vpf9u1u11hrdnxtlo8660hulqkb7gpw0em56ifhmow3rnlayhm5atfb9d8ow9juo5sk9j11o35rgk47mhhr0dszmhi9ore0fcx41fzgh1kphsfclb7ckuauxewdyqpq4zd9titafzl0wuudssieg7ylv8qjez23jvdld8qw391ar27s5uqk3o234z86zxwgsln7i6xw3bdl5t3b6sgfc8elb88fnzpz8jdrmsrw2kaaxz027928vgmtkyhlce1ew5shfyuxss4aior65bvkc0e0cux3npz76m73pda5oz5e0bxbewc3760nj8bd68gfjbxh5vrgp72ijux0dqn8jpc4ar4yb8mtww18sqe898hkocjitnvc1m389i07g7x4aiwj3xaumextllkzleizy7jlr4udsq8xkm5u714q2u59i8qkfmqgggaa8zc4ho9czylv4qpn409znu6rij1g9ir9s42n4wp3mna3ilieafr91losjb8clj6pvhjof48a99l9h50tfyhy04sll54qgeffebz8r4qcjwj79flt2eoev0oyeqhiypsoxvoz0kog3jd4u5smfroome1eamoq6yduvjolc636hq5hqyywd8gq038qp05nwiryrbei0it6ctin8zeteir9qdmi2s80eyhij == 
\m\l\r\s\9\5\v\t\e\f\l\j\3\u\y\a\k\k\5\r\c\v\5\0\l\k\k\o\p\p\m\k\2\k\k\0\l\8\7\2\e\q\l\c\u\e\m\4\i\x\t\m\l\v\n\f\q\s\3\g\f\7\w\p\m\3\1\5\a\m\d\a\e\f\f\0\n\k\j\u\y\g\v\c\s\t\x\7\j\c\5\m\9\o\h\g\t\e\h\w\y\t\5\n\y\x\l\4\b\k\v\9\n\8\r\i\8\9\8\v\k\e\9\p\4\3\9\z\q\y\x\6\0\l\p\6\l\2\s\6\l\t\n\x\r\e\m\n\d\h\g\j\a\d\6\b\x\d\x\x\d\0\1\3\z\i\j\j\7\j\z\l\i\y\r\e\e\t\c\c\h\y\w\p\3\5\b\b\1\8\v\7\j\f\l\h\9\q\r\j\e\d\4\1\g\7\q\s\d\1\t\z\0\g\p\e\6\j\l\h\1\2\9\h\7\x\c\e\7\q\c\w\8\k\m\f\l\d\g\r\y\l\3\1\m\4\y\i\b\d\y\t\t\p\z\o\f\v\7\8\8\w\u\e\g\e\x\5\3\f\8\v\9\y\r\i\l\1\4\t\h\h\w\l\t\d\j\x\v\n\z\7\l\3\i\2\f\d\x\x\f\g\g\x\l\y\t\w\g\m\5\d\b\7\9\s\d\a\a\k\x\m\3\a\d\c\n\g\1\0\w\a\e\k\n\c\7\w\6\j\d\m\g\k\1\5\3\i\1\h\4\7\d\t\j\7\s\8\0\7\v\p\f\9\u\1\u\1\1\h\r\d\n\x\t\l\o\8\6\6\0\h\u\l\q\k\b\7\g\p\w\0\e\m\5\6\i\f\h\m\o\w\3\r\n\l\a\y\h\m\5\a\t\f\b\9\d\8\o\w\9\j\u\o\5\s\k\9\j\1\1\o\3\5\r\g\k\4\7\m\h\h\r\0\d\s\z\m\h\i\9\o\r\e\0\f\c\x\4\1\f\z\g\h\1\k\p\h\s\f\c\l\b\7\c\k\u\a\u\x\e\w\d\y\q\p\q\4\z\d\9\t\i\t\a\f\z\l\0\w\u\u\d\s\s\i\e\g\7\y\l\v\8\q\j\e\z\2\3\j\v\d\l\d\8\q\w\3\9\1\a\r\2\7\s\5\u\q\k\3\o\2\3\4\z\8\6\z\x\w\g\s\l\n\7\i\6\x\w\3\b\d\l\5\t\3\b\6\s\g\f\c\8\e\l\b\8\8\f\n\z\p\z\8\j\d\r\m\s\r\w\2\k\a\a\x\z\0\2\7\9\2\8\v\g\m\t\k\y\h\l\c\e\1\e\w\5\s\h\f\y\u\x\s\s\4\a\i\o\r\6\5\b\v\k\c\0\e\0\c\u\x\3\n\p\z\7\6\m\7\3\p\d\a\5\o\z\5\e\0\b\x\b\e\w\c\3\7\6\0\n\j\8\b\d\6\8\g\f\j\b\x\h\5\v\r\g\p\7\2\i\j\u\x\0\d\q\n\8\j\p\c\4\a\r\4\y\b\8\m\t\w\w\1\8\s\q\e\8\9\8\h\k\o\c\j\i\t\n\v\c\1\m\3\8\9\i\0\7\g\7\x\4\a\i\w\j\3\x\a\u\m\e\x\t\l\l\k\z\l\e\i\z\y\7\j\l\r\4\u\d\s\q\8\x\k\m\5\u\7\1\4\q\2\u\5\9\i\8\q\k\f\m\q\g\g\g\a\a\8\z\c\4\h\o\9\c\z\y\l\v\4\q\p\n\4\0\9\z\n\u\6\r\i\j\1\g\9\i\r\9\s\4\2\n\4\w\p\3\m\n\a\3\i\l\i\e\a\f\r\9\1\l\o\s\j\b\8\c\l\j\6\p\v\h\j\o\f\4\8\a\9\9\l\9\h\5\0\t\f\y\h\y\0\4\s\l\l\5\4\q\g\e\f\f\e\b\z\8\r\4\q\c\j\w\j\7\9\f\l\t\2\e\o\e\v\0\o\y\e\q\h\i\y\p\s\o\x\v\o\z\0\k\o\g\3\j\d\4\u\5\s\m\f\r\o\o\m\e\1\e\a\m\o\q\6\y\d\u\v\j\o\l\c\6\3\6\h\q\5\h\q\y\y\w\d\8\g\q\0\3\8\q\p\0\5\n\w\i\r\y\r\b\e\i\0\i\t\6\c\t\i\n\8\z\e\t\e\i\r\9\q\d\m\i\2\s\8\0\e\y\h\i\j ]] 00:09:24.218 19:29:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:24.218 19:29:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ mlrs95vteflj3uyakk5rcv50lkkoppmk2kk0l872eqlcuem4ixtmlvnfqs3gf7wpm315amdaeff0nkjuygvcstx7jc5m9ohgtehwyt5nyxl4bkv9n8ri898vke9p439zqyx60lp6l2s6ltnxremndhgjad6bxdxxd013zijj7jzliyreetcchywp35bb18v7jflh9qrjed41g7qsd1tz0gpe6jlh129h7xce7qcw8kmfldgryl31m4yibdyttpzofv788wuegex53f8v9yril14thhwltdjxvnz7l3i2fdxxfggxlytwgm5db79sdaakxm3adcng10waeknc7w6jdmgk153i1h47dtj7s807vpf9u1u11hrdnxtlo8660hulqkb7gpw0em56ifhmow3rnlayhm5atfb9d8ow9juo5sk9j11o35rgk47mhhr0dszmhi9ore0fcx41fzgh1kphsfclb7ckuauxewdyqpq4zd9titafzl0wuudssieg7ylv8qjez23jvdld8qw391ar27s5uqk3o234z86zxwgsln7i6xw3bdl5t3b6sgfc8elb88fnzpz8jdrmsrw2kaaxz027928vgmtkyhlce1ew5shfyuxss4aior65bvkc0e0cux3npz76m73pda5oz5e0bxbewc3760nj8bd68gfjbxh5vrgp72ijux0dqn8jpc4ar4yb8mtww18sqe898hkocjitnvc1m389i07g7x4aiwj3xaumextllkzleizy7jlr4udsq8xkm5u714q2u59i8qkfmqgggaa8zc4ho9czylv4qpn409znu6rij1g9ir9s42n4wp3mna3ilieafr91losjb8clj6pvhjof48a99l9h50tfyhy04sll54qgeffebz8r4qcjwj79flt2eoev0oyeqhiypsoxvoz0kog3jd4u5smfroome1eamoq6yduvjolc636hq5hqyywd8gq038qp05nwiryrbei0it6ctin8zeteir9qdmi2s80eyhij == 
\m\l\r\s\9\5\v\t\e\f\l\j\3\u\y\a\k\k\5\r\c\v\5\0\l\k\k\o\p\p\m\k\2\k\k\0\l\8\7\2\e\q\l\c\u\e\m\4\i\x\t\m\l\v\n\f\q\s\3\g\f\7\w\p\m\3\1\5\a\m\d\a\e\f\f\0\n\k\j\u\y\g\v\c\s\t\x\7\j\c\5\m\9\o\h\g\t\e\h\w\y\t\5\n\y\x\l\4\b\k\v\9\n\8\r\i\8\9\8\v\k\e\9\p\4\3\9\z\q\y\x\6\0\l\p\6\l\2\s\6\l\t\n\x\r\e\m\n\d\h\g\j\a\d\6\b\x\d\x\x\d\0\1\3\z\i\j\j\7\j\z\l\i\y\r\e\e\t\c\c\h\y\w\p\3\5\b\b\1\8\v\7\j\f\l\h\9\q\r\j\e\d\4\1\g\7\q\s\d\1\t\z\0\g\p\e\6\j\l\h\1\2\9\h\7\x\c\e\7\q\c\w\8\k\m\f\l\d\g\r\y\l\3\1\m\4\y\i\b\d\y\t\t\p\z\o\f\v\7\8\8\w\u\e\g\e\x\5\3\f\8\v\9\y\r\i\l\1\4\t\h\h\w\l\t\d\j\x\v\n\z\7\l\3\i\2\f\d\x\x\f\g\g\x\l\y\t\w\g\m\5\d\b\7\9\s\d\a\a\k\x\m\3\a\d\c\n\g\1\0\w\a\e\k\n\c\7\w\6\j\d\m\g\k\1\5\3\i\1\h\4\7\d\t\j\7\s\8\0\7\v\p\f\9\u\1\u\1\1\h\r\d\n\x\t\l\o\8\6\6\0\h\u\l\q\k\b\7\g\p\w\0\e\m\5\6\i\f\h\m\o\w\3\r\n\l\a\y\h\m\5\a\t\f\b\9\d\8\o\w\9\j\u\o\5\s\k\9\j\1\1\o\3\5\r\g\k\4\7\m\h\h\r\0\d\s\z\m\h\i\9\o\r\e\0\f\c\x\4\1\f\z\g\h\1\k\p\h\s\f\c\l\b\7\c\k\u\a\u\x\e\w\d\y\q\p\q\4\z\d\9\t\i\t\a\f\z\l\0\w\u\u\d\s\s\i\e\g\7\y\l\v\8\q\j\e\z\2\3\j\v\d\l\d\8\q\w\3\9\1\a\r\2\7\s\5\u\q\k\3\o\2\3\4\z\8\6\z\x\w\g\s\l\n\7\i\6\x\w\3\b\d\l\5\t\3\b\6\s\g\f\c\8\e\l\b\8\8\f\n\z\p\z\8\j\d\r\m\s\r\w\2\k\a\a\x\z\0\2\7\9\2\8\v\g\m\t\k\y\h\l\c\e\1\e\w\5\s\h\f\y\u\x\s\s\4\a\i\o\r\6\5\b\v\k\c\0\e\0\c\u\x\3\n\p\z\7\6\m\7\3\p\d\a\5\o\z\5\e\0\b\x\b\e\w\c\3\7\6\0\n\j\8\b\d\6\8\g\f\j\b\x\h\5\v\r\g\p\7\2\i\j\u\x\0\d\q\n\8\j\p\c\4\a\r\4\y\b\8\m\t\w\w\1\8\s\q\e\8\9\8\h\k\o\c\j\i\t\n\v\c\1\m\3\8\9\i\0\7\g\7\x\4\a\i\w\j\3\x\a\u\m\e\x\t\l\l\k\z\l\e\i\z\y\7\j\l\r\4\u\d\s\q\8\x\k\m\5\u\7\1\4\q\2\u\5\9\i\8\q\k\f\m\q\g\g\g\a\a\8\z\c\4\h\o\9\c\z\y\l\v\4\q\p\n\4\0\9\z\n\u\6\r\i\j\1\g\9\i\r\9\s\4\2\n\4\w\p\3\m\n\a\3\i\l\i\e\a\f\r\9\1\l\o\s\j\b\8\c\l\j\6\p\v\h\j\o\f\4\8\a\9\9\l\9\h\5\0\t\f\y\h\y\0\4\s\l\l\5\4\q\g\e\f\f\e\b\z\8\r\4\q\c\j\w\j\7\9\f\l\t\2\e\o\e\v\0\o\y\e\q\h\i\y\p\s\o\x\v\o\z\0\k\o\g\3\j\d\4\u\5\s\m\f\r\o\o\m\e\1\e\a\m\o\q\6\y\d\u\v\j\o\l\c\6\3\6\h\q\5\h\q\y\y\w\d\8\g\q\0\3\8\q\p\0\5\n\w\i\r\y\r\b\e\i\0\i\t\6\c\t\i\n\8\z\e\t\e\i\r\9\q\d\m\i\2\s\8\0\e\y\h\i\j ]] 00:09:24.218 19:29:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:24.477 19:29:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:09:24.477 19:29:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:24.477 19:29:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:24.477 19:29:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:24.477 { 00:09:24.477 "subsystems": [ 00:09:24.477 { 00:09:24.477 "subsystem": "bdev", 00:09:24.477 "config": [ 00:09:24.477 { 00:09:24.477 "params": { 00:09:24.477 "block_size": 512, 00:09:24.477 "num_blocks": 1048576, 00:09:24.477 "name": "malloc0" 00:09:24.477 }, 00:09:24.477 "method": "bdev_malloc_create" 00:09:24.477 }, 00:09:24.477 { 00:09:24.477 "params": { 00:09:24.477 "filename": "/dev/zram1", 00:09:24.477 "name": "uring0" 00:09:24.477 }, 00:09:24.477 "method": "bdev_uring_create" 00:09:24.477 }, 00:09:24.477 { 00:09:24.477 "method": "bdev_wait_for_examine" 00:09:24.477 } 00:09:24.477 ] 00:09:24.477 } 00:09:24.477 ] 00:09:24.477 } 00:09:24.477 [2024-12-05 19:29:17.800281] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:24.477 [2024-12-05 19:29:17.800394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61231 ] 00:09:24.736 [2024-12-05 19:29:17.948820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.736 [2024-12-05 19:29:18.007129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.736 [2024-12-05 19:29:18.063382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:26.115  [2024-12-05T19:29:20.487Z] Copying: 153/512 [MB] (153 MBps) [2024-12-05T19:29:21.422Z] Copying: 319/512 [MB] (166 MBps) [2024-12-05T19:29:21.680Z] Copying: 478/512 [MB] (158 MBps) [2024-12-05T19:29:21.939Z] Copying: 512/512 [MB] (average 158 MBps) 00:09:28.502 00:09:28.502 19:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:28.502 19:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:28.502 19:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:28.502 19:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:28.502 19:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:28.502 19:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:09:28.502 19:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:28.502 19:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:28.761 [2024-12-05 19:29:21.942808] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:28.761 [2024-12-05 19:29:21.943108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61293 ] 00:09:28.761 { 00:09:28.761 "subsystems": [ 00:09:28.761 { 00:09:28.761 "subsystem": "bdev", 00:09:28.761 "config": [ 00:09:28.761 { 00:09:28.761 "params": { 00:09:28.761 "block_size": 512, 00:09:28.761 "num_blocks": 1048576, 00:09:28.761 "name": "malloc0" 00:09:28.761 }, 00:09:28.761 "method": "bdev_malloc_create" 00:09:28.761 }, 00:09:28.761 { 00:09:28.761 "params": { 00:09:28.761 "filename": "/dev/zram1", 00:09:28.761 "name": "uring0" 00:09:28.761 }, 00:09:28.761 "method": "bdev_uring_create" 00:09:28.761 }, 00:09:28.761 { 00:09:28.761 "params": { 00:09:28.761 "name": "uring0" 00:09:28.761 }, 00:09:28.761 "method": "bdev_uring_delete" 00:09:28.761 }, 00:09:28.761 { 00:09:28.761 "method": "bdev_wait_for_examine" 00:09:28.761 } 00:09:28.761 ] 00:09:28.761 } 00:09:28.761 ] 00:09:28.761 } 00:09:28.761 [2024-12-05 19:29:22.085885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.761 [2024-12-05 19:29:22.144305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.020 [2024-12-05 19:29:22.202094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.020  [2024-12-05T19:29:23.024Z] Copying: 0/0 [B] (average 0 Bps) 00:09:29.587 00:09:29.587 19:29:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:09:29.587 19:29:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:29.587 19:29:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:09:29.587 19:29:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:29.587 19:29:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:09:29.587 19:29:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.587 19:29:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:29.587 19:29:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:29.587 19:29:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.587 19:29:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.587 19:29:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.587 19:29:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.587 19:29:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.587 19:29:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.587 19:29:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:29.587 19:29:22 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:29.587 [2024-12-05 19:29:22.868588] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:09:29.587 [2024-12-05 19:29:22.868701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61318 ] 00:09:29.587 { 00:09:29.587 "subsystems": [ 00:09:29.587 { 00:09:29.587 "subsystem": "bdev", 00:09:29.587 "config": [ 00:09:29.587 { 00:09:29.587 "params": { 00:09:29.587 "block_size": 512, 00:09:29.587 "num_blocks": 1048576, 00:09:29.587 "name": "malloc0" 00:09:29.587 }, 00:09:29.587 "method": "bdev_malloc_create" 00:09:29.587 }, 00:09:29.587 { 00:09:29.587 "params": { 00:09:29.587 "filename": "/dev/zram1", 00:09:29.587 "name": "uring0" 00:09:29.587 }, 00:09:29.587 "method": "bdev_uring_create" 00:09:29.587 }, 00:09:29.587 { 00:09:29.587 "params": { 00:09:29.587 "name": "uring0" 00:09:29.587 }, 00:09:29.587 "method": "bdev_uring_delete" 00:09:29.587 }, 00:09:29.587 { 00:09:29.587 "method": "bdev_wait_for_examine" 00:09:29.587 } 00:09:29.587 ] 00:09:29.587 } 00:09:29.587 ] 00:09:29.587 } 00:09:29.587 [2024-12-05 19:29:23.014691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.846 [2024-12-05 19:29:23.073277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.846 [2024-12-05 19:29:23.127855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:30.103 [2024-12-05 19:29:23.336726] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:30.103 [2024-12-05 19:29:23.336804] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:30.103 [2024-12-05 19:29:23.336833] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:09:30.103 [2024-12-05 19:29:23.336844] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:30.359 [2024-12-05 19:29:23.671882] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:30.359 19:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:09:30.359 19:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:30.359 19:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:09:30.359 19:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:09:30.359 19:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:09:30.359 19:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:30.359 19:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:30.360 19:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:09:30.360 19:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:09:30.360 19:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:09:30.360 19:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:09:30.360 19:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:30.617 ************************************ 00:09:30.617 END TEST dd_uring_copy 00:09:30.617 ************************************ 00:09:30.617 00:09:30.617 real 0m15.197s 00:09:30.617 user 0m10.252s 00:09:30.617 sys 0m12.893s 00:09:30.617 19:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.617 19:29:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:30.617 00:09:30.617 real 0m15.440s 00:09:30.617 user 0m10.391s 00:09:30.617 sys 0m12.999s 00:09:30.617 ************************************ 00:09:30.617 END TEST spdk_dd_uring 00:09:30.617 ************************************ 00:09:30.617 19:29:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.617 19:29:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:30.874 19:29:24 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:30.874 19:29:24 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.874 19:29:24 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.874 19:29:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:30.874 ************************************ 00:09:30.874 START TEST spdk_dd_sparse 00:09:30.874 ************************************ 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:30.874 * Looking for test storage... 00:09:30.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.874 19:29:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.874 --rc genhtml_branch_coverage=1 00:09:30.874 --rc genhtml_function_coverage=1 00:09:30.874 --rc genhtml_legend=1 00:09:30.874 --rc geninfo_all_blocks=1 00:09:30.874 --rc geninfo_unexecuted_blocks=1 00:09:30.874 00:09:30.874 ' 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.875 --rc genhtml_branch_coverage=1 00:09:30.875 --rc genhtml_function_coverage=1 00:09:30.875 --rc genhtml_legend=1 00:09:30.875 --rc geninfo_all_blocks=1 00:09:30.875 --rc geninfo_unexecuted_blocks=1 00:09:30.875 00:09:30.875 ' 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:30.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.875 --rc genhtml_branch_coverage=1 00:09:30.875 --rc genhtml_function_coverage=1 00:09:30.875 --rc genhtml_legend=1 00:09:30.875 --rc geninfo_all_blocks=1 00:09:30.875 --rc geninfo_unexecuted_blocks=1 00:09:30.875 00:09:30.875 ' 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.875 --rc genhtml_branch_coverage=1 00:09:30.875 --rc genhtml_function_coverage=1 00:09:30.875 --rc genhtml_legend=1 00:09:30.875 --rc geninfo_all_blocks=1 00:09:30.875 --rc geninfo_unexecuted_blocks=1 00:09:30.875 00:09:30.875 ' 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.875 19:29:24 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:30.875 1+0 records in 00:09:30.875 1+0 records out 00:09:30.875 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00903019 s, 464 MB/s 00:09:30.875 19:29:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:31.138 1+0 records in 00:09:31.138 1+0 records out 00:09:31.138 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00766571 s, 547 MB/s 00:09:31.138 19:29:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:31.138 1+0 records in 00:09:31.138 1+0 records out 00:09:31.138 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.011563 s, 363 MB/s 00:09:31.138 19:29:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:31.138 19:29:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.138 19:29:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.138 19:29:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:31.138 ************************************ 00:09:31.138 START TEST dd_sparse_file_to_file 00:09:31.138 ************************************ 00:09:31.138 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:09:31.138 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:31.138 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:31.138 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:31.138 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:31.139 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:31.139 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:31.139 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:31.139 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:09:31.139 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:31.139 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:31.139 [2024-12-05 19:29:24.405476] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
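For context, the prepare step above lays out the sparse input: GNU dd without conv=notrunc truncates the output to the seek offset before writing, so file_zero1 ends up with three 4 MiB data extents at 0, 16 MiB and 32 MiB and holes in between, while the 100 MB flat file later backs the AIO bdev. A minimal stand-alone reproduction, assuming GNU coreutils dd, truncate and stat:

    # 100 MB backing file for the bdev_aio device used by the lvstore steps
    truncate dd_sparse_aio_disk --size 104857600
    # data at 0-4 MiB, 16-20 MiB and 32-36 MiB; everything in between stays a hole
    dd if=/dev/zero of=file_zero1 bs=4M count=1
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8
    # expect apparent size 37748736 bytes but only 24576 512-byte blocks allocated
    stat --printf='%s %b\n' file_zero1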
00:09:31.139 [2024-12-05 19:29:24.405774] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61416 ] 00:09:31.139 { 00:09:31.139 "subsystems": [ 00:09:31.139 { 00:09:31.139 "subsystem": "bdev", 00:09:31.139 "config": [ 00:09:31.139 { 00:09:31.139 "params": { 00:09:31.139 "block_size": 4096, 00:09:31.139 "filename": "dd_sparse_aio_disk", 00:09:31.139 "name": "dd_aio" 00:09:31.139 }, 00:09:31.139 "method": "bdev_aio_create" 00:09:31.139 }, 00:09:31.139 { 00:09:31.139 "params": { 00:09:31.139 "lvs_name": "dd_lvstore", 00:09:31.139 "bdev_name": "dd_aio" 00:09:31.139 }, 00:09:31.139 "method": "bdev_lvol_create_lvstore" 00:09:31.139 }, 00:09:31.139 { 00:09:31.139 "method": "bdev_wait_for_examine" 00:09:31.139 } 00:09:31.139 ] 00:09:31.139 } 00:09:31.139 ] 00:09:31.139 } 00:09:31.139 [2024-12-05 19:29:24.554858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.418 [2024-12-05 19:29:24.608506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.418 [2024-12-05 19:29:24.667528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.418  [2024-12-05T19:29:25.151Z] Copying: 12/36 [MB] (average 857 MBps) 00:09:31.714 00:09:31.714 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:31.714 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:31.714 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:31.714 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:31.714 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:31.714 19:29:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:31.714 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:31.714 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:31.714 ************************************ 00:09:31.715 END TEST dd_sparse_file_to_file 00:09:31.715 ************************************ 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:31.715 00:09:31.715 real 0m0.669s 00:09:31.715 user 0m0.401s 00:09:31.715 sys 0m0.395s 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:31.715 ************************************ 00:09:31.715 START TEST dd_sparse_file_to_bdev 
00:09:31.715 ************************************ 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:31.715 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:31.715 [2024-12-05 19:29:25.120354] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:09:31.715 [2024-12-05 19:29:25.120619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61464 ] 00:09:31.715 { 00:09:31.715 "subsystems": [ 00:09:31.715 { 00:09:31.715 "subsystem": "bdev", 00:09:31.715 "config": [ 00:09:31.715 { 00:09:31.715 "params": { 00:09:31.715 "block_size": 4096, 00:09:31.715 "filename": "dd_sparse_aio_disk", 00:09:31.715 "name": "dd_aio" 00:09:31.715 }, 00:09:31.715 "method": "bdev_aio_create" 00:09:31.715 }, 00:09:31.715 { 00:09:31.715 "params": { 00:09:31.715 "lvs_name": "dd_lvstore", 00:09:31.715 "lvol_name": "dd_lvol", 00:09:31.715 "size_in_mib": 36, 00:09:31.715 "thin_provision": true 00:09:31.715 }, 00:09:31.715 "method": "bdev_lvol_create" 00:09:31.715 }, 00:09:31.715 { 00:09:31.715 "method": "bdev_wait_for_examine" 00:09:31.715 } 00:09:31.715 ] 00:09:31.715 } 00:09:31.715 ] 00:09:31.715 } 00:09:31.972 [2024-12-05 19:29:25.270917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.972 [2024-12-05 19:29:25.326986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.972 [2024-12-05 19:29:25.388113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.230  [2024-12-05T19:29:25.925Z] Copying: 12/36 [MB] (average 521 MBps) 00:09:32.488 00:09:32.488 00:09:32.488 real 0m0.647s 00:09:32.488 user 0m0.396s 00:09:32.488 sys 0m0.373s 00:09:32.488 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.488 ************************************ 00:09:32.488 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:32.488 END TEST dd_sparse_file_to_bdev 00:09:32.488 ************************************ 00:09:32.488 19:29:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:09:32.488 19:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.488 19:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.488 19:29:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:32.488 ************************************ 00:09:32.488 START TEST dd_sparse_bdev_to_file 00:09:32.488 ************************************ 00:09:32.488 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:09:32.488 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:09:32.488 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:32.488 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:32.488 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:32.488 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:32.488 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:09:32.488 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:32.488 19:29:25 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:32.488 { 00:09:32.488 "subsystems": [ 00:09:32.488 { 00:09:32.488 "subsystem": "bdev", 00:09:32.488 "config": [ 00:09:32.488 { 00:09:32.488 "params": { 00:09:32.488 "block_size": 4096, 00:09:32.488 "filename": "dd_sparse_aio_disk", 00:09:32.488 "name": "dd_aio" 00:09:32.488 }, 00:09:32.488 "method": "bdev_aio_create" 00:09:32.488 }, 00:09:32.488 { 00:09:32.488 "method": "bdev_wait_for_examine" 00:09:32.488 } 00:09:32.488 ] 00:09:32.488 } 00:09:32.488 ] 00:09:32.488 } 00:09:32.488 [2024-12-05 19:29:25.824397] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
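Taken together, the three sub-tests push the same sparse payload through spdk_dd in a loop: file to file, file into a thin-provisioned lvol carved out of the AIO bdev, and from that lvol back out to a file, each run with --sparse so holes are detected and skipped, and each driven by a small JSON bdev config fed in on /dev/fd/62 as printed above. A rough sketch, with spdk_dd assumed to be on PATH and BDEV_JSON standing in for the per-step config shown in the log (both are placeholders here, and each step uses its own config variant):

    spdk_dd --if=file_zero1 --of=file_zero2         --bs=12582912 --sparse --json <(echo "$BDEV_JSON")
    spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json <(echo "$BDEV_JSON")
    spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json <(echo "$BDEV_JSON")
    # the round trip should preserve both apparent size and allocation:
    # 37748736 bytes and 24576 blocks for file_zero2 and file_zero3 alike
    stat --printf='%s %b\n' file_zero2 file_zero3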
00:09:32.488 [2024-12-05 19:29:25.824492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61497 ] 00:09:32.756 [2024-12-05 19:29:25.972835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.756 [2024-12-05 19:29:26.037040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.756 [2024-12-05 19:29:26.090931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.756  [2024-12-05T19:29:26.451Z] Copying: 12/36 [MB] (average 923 MBps) 00:09:33.014 00:09:33.014 19:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:33.014 19:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:33.014 19:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:33.014 19:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:33.014 19:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:33.014 19:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:33.014 19:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:33.014 19:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:33.014 ************************************ 00:09:33.014 END TEST dd_sparse_bdev_to_file 00:09:33.014 ************************************ 00:09:33.014 19:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:33.014 19:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:33.014 00:09:33.014 real 0m0.657s 00:09:33.014 user 0m0.399s 00:09:33.014 sys 0m0.368s 00:09:33.014 19:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.014 19:29:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:33.270 19:29:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:09:33.270 19:29:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:33.270 19:29:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:09:33.270 19:29:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:09:33.270 19:29:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:09:33.270 ************************************ 00:09:33.270 END TEST spdk_dd_sparse 00:09:33.270 ************************************ 00:09:33.270 00:09:33.270 real 0m2.407s 00:09:33.270 user 0m1.389s 00:09:33.270 sys 0m1.367s 00:09:33.270 19:29:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.270 19:29:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:33.270 19:29:26 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:33.270 19:29:26 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.270 19:29:26 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.270 19:29:26 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.270 ************************************ 00:09:33.270 START TEST spdk_dd_negative 00:09:33.270 ************************************ 00:09:33.270 19:29:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:33.270 * Looking for test storage... 00:09:33.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:33.270 19:29:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:33.271 19:29:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:09:33.271 19:29:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:33.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.529 --rc genhtml_branch_coverage=1 00:09:33.529 --rc genhtml_function_coverage=1 00:09:33.529 --rc genhtml_legend=1 00:09:33.529 --rc geninfo_all_blocks=1 00:09:33.529 --rc geninfo_unexecuted_blocks=1 00:09:33.529 00:09:33.529 ' 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:33.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.529 --rc genhtml_branch_coverage=1 00:09:33.529 --rc genhtml_function_coverage=1 00:09:33.529 --rc genhtml_legend=1 00:09:33.529 --rc geninfo_all_blocks=1 00:09:33.529 --rc geninfo_unexecuted_blocks=1 00:09:33.529 00:09:33.529 ' 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:33.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.529 --rc genhtml_branch_coverage=1 00:09:33.529 --rc genhtml_function_coverage=1 00:09:33.529 --rc genhtml_legend=1 00:09:33.529 --rc geninfo_all_blocks=1 00:09:33.529 --rc geninfo_unexecuted_blocks=1 00:09:33.529 00:09:33.529 ' 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:33.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.529 --rc genhtml_branch_coverage=1 00:09:33.529 --rc genhtml_function_coverage=1 00:09:33.529 --rc genhtml_legend=1 00:09:33.529 --rc geninfo_all_blocks=1 00:09:33.529 --rc geninfo_unexecuted_blocks=1 00:09:33.529 00:09:33.529 ' 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.529 19:29:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:33.530 ************************************ 00:09:33.530 START TEST 
dd_invalid_arguments 00:09:33.530 ************************************ 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:33.530 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:33.530 00:09:33.530 CPU options: 00:09:33.530 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:33.530 (like [0,1,10]) 00:09:33.530 --lcores lcore to CPU mapping list. The list is in the format: 00:09:33.530 [<,lcores[@CPUs]>...] 00:09:33.530 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:33.530 Within the group, '-' is used for range separator, 00:09:33.530 ',' is used for single number separator. 00:09:33.530 '( )' can be omitted for single element group, 00:09:33.530 '@' can be omitted if cpus and lcores have the same value 00:09:33.530 --disable-cpumask-locks Disable CPU core lock files. 00:09:33.530 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:33.530 pollers in the app support interrupt mode) 00:09:33.530 -p, --main-core main (primary) core for DPDK 00:09:33.530 00:09:33.530 Configuration options: 00:09:33.530 -c, --config, --json JSON config file 00:09:33.530 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:33.530 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:33.530 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:33.530 --rpcs-allowed comma-separated list of permitted RPCS 00:09:33.530 --json-ignore-init-errors don't exit on invalid config entry 00:09:33.530 00:09:33.530 Memory options: 00:09:33.530 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:33.530 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:33.530 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:33.530 -R, --huge-unlink unlink huge files after initialization 00:09:33.530 -n, --mem-channels number of memory channels used for DPDK 00:09:33.530 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:33.530 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:33.530 --no-huge run without using hugepages 00:09:33.530 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:09:33.530 -i, --shm-id shared memory ID (optional) 00:09:33.530 -g, --single-file-segments force creating just one hugetlbfs file 00:09:33.530 00:09:33.530 PCI options: 00:09:33.530 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:33.530 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:33.530 -u, --no-pci disable PCI access 00:09:33.530 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:33.530 00:09:33.530 Log options: 00:09:33.530 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:09:33.530 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:09:33.530 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:09:33.530 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:09:33.530 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:09:33.530 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:09:33.530 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:09:33.530 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:09:33.530 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:09:33.530 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:09:33.530 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:33.530 --silence-noticelog disable notice level logging to stderr 00:09:33.530 00:09:33.530 Trace options: 00:09:33.530 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:33.530 setting 0 to disable trace (default 32768) 00:09:33.530 Tracepoints vary in size and can use more than one trace entry. 00:09:33.530 -e, --tpoint-group [:] 00:09:33.530 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:33.530 [2024-12-05 19:29:26.819066] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:09:33.530 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:09:33.530 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:09:33.530 bdev_raid, scheduler, all). 00:09:33.530 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:33.530 a tracepoint group. First tpoint inside a group can be enabled by 00:09:33.530 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:33.530 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:09:33.530 in /include/spdk_internal/trace_defs.h 00:09:33.530 00:09:33.530 Other options: 00:09:33.530 -h, --help show this usage 00:09:33.530 -v, --version print SPDK version 00:09:33.530 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:33.530 --env-context Opaque context for use of the env implementation 00:09:33.530 00:09:33.530 Application specific: 00:09:33.530 [--------- DD Options ---------] 00:09:33.530 --if Input file. Must specify either --if or --ib. 00:09:33.530 --ib Input bdev. Must specifier either --if or --ib 00:09:33.530 --of Output file. Must specify either --of or --ob. 00:09:33.530 --ob Output bdev. Must specify either --of or --ob. 00:09:33.530 --iflag Input file flags. 00:09:33.530 --oflag Output file flags. 00:09:33.530 --bs I/O unit size (default: 4096) 00:09:33.530 --qd Queue depth (default: 2) 00:09:33.530 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:33.530 --skip Skip this many I/O units at start of input. (default: 0) 00:09:33.530 --seek Skip this many I/O units at start of output. (default: 0) 00:09:33.530 --aio Force usage of AIO. (by default io_uring is used if available) 00:09:33.530 --sparse Enable hole skipping in input target 00:09:33.530 Available iflag and oflag values: 00:09:33.530 append - append mode 00:09:33.530 direct - use direct I/O for data 00:09:33.530 directory - fail unless a directory 00:09:33.530 dsync - use synchronized I/O for data 00:09:33.530 noatime - do not update access time 00:09:33.530 noctty - do not assign controlling terminal from file 00:09:33.530 nofollow - do not follow symlinks 00:09:33.530 nonblock - use non-blocking I/O 00:09:33.530 sync - use synchronized I/O for data and metadata 00:09:33.530 ************************************ 00:09:33.530 END TEST dd_invalid_arguments 00:09:33.530 ************************************ 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.530 00:09:33.530 real 0m0.080s 00:09:33.530 user 0m0.049s 00:09:33.530 sys 0m0.030s 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:33.530 ************************************ 00:09:33.530 START TEST dd_double_input 00:09:33.530 ************************************ 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:33.530 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:33.530 [2024-12-05 19:29:26.950584] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
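The negative tests in this run all follow one pattern: spdk_dd is launched with a missing, duplicated or malformed option through the suite's NOT helper, which succeeds only when the wrapped command exits non-zero, and stderr is expected to carry the specific complaint (here "You may specify either --if or --ib, but not both."). Outside the suite the same checks reduce to plain shell negation; the invocations below are the ones exercised above and further down, with the dd.dump file paths shortened for readability:

    # every line must fail for the corresponding test to pass
    ! spdk_dd --ii= --ob=                          # unrecognized option
    ! spdk_dd --if=dd.dump0 --ib= --ob=            # --if and --ib are mutually exclusive
    ! spdk_dd --if=dd.dump0 --of=dd.dump1 --ob=    # --of and --ob are mutually exclusive
    ! spdk_dd --ob=                                # no input specified at all
    ! spdk_dd --if=dd.dump0                        # no output specified at all
    ! spdk_dd --if=dd.dump0 --of=dd.dump1 --bs=0   # invalid --bs value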
00:09:33.789 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:09:33.789 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.789 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:33.789 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.789 00:09:33.789 real 0m0.079s 00:09:33.789 user 0m0.045s 00:09:33.789 sys 0m0.032s 00:09:33.789 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.789 ************************************ 00:09:33.789 END TEST dd_double_input 00:09:33.789 ************************************ 00:09:33.789 19:29:26 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:33.789 ************************************ 00:09:33.789 START TEST dd_double_output 00:09:33.789 ************************************ 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:33.789 [2024-12-05 19:29:27.088018] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.789 00:09:33.789 real 0m0.085s 00:09:33.789 user 0m0.047s 00:09:33.789 sys 0m0.033s 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:09:33.789 ************************************ 00:09:33.789 END TEST dd_double_output 00:09:33.789 ************************************ 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:33.789 ************************************ 00:09:33.789 START TEST dd_no_input 00:09:33.789 ************************************ 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:33.789 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:33.789 [2024-12-05 19:29:27.222550] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:34.047 ************************************ 00:09:34.047 END TEST dd_no_input 00:09:34.047 ************************************ 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:34.047 00:09:34.047 real 0m0.082s 00:09:34.047 user 0m0.052s 00:09:34.047 sys 0m0.028s 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:34.047 ************************************ 00:09:34.047 START TEST dd_no_output 00:09:34.047 ************************************ 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:34.047 [2024-12-05 19:29:27.369023] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:09:34.047 19:29:27 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:34.047 00:09:34.047 real 0m0.094s 00:09:34.047 user 0m0.056s 00:09:34.047 sys 0m0.037s 00:09:34.047 ************************************ 00:09:34.047 END TEST dd_no_output 00:09:34.047 ************************************ 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:34.047 ************************************ 00:09:34.047 START TEST dd_wrong_blocksize 00:09:34.047 ************************************ 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:34.047 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:34.305 [2024-12-05 19:29:27.502718] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:09:34.305 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:34.306 00:09:34.306 real 0m0.073s 00:09:34.306 user 0m0.041s 00:09:34.306 sys 0m0.032s 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.306 ************************************ 00:09:34.306 END TEST dd_wrong_blocksize 00:09:34.306 ************************************ 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:34.306 ************************************ 00:09:34.306 START TEST dd_smaller_blocksize 00:09:34.306 ************************************ 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.306 
19:29:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:34.306 19:29:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:34.306 [2024-12-05 19:29:27.635433] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:09:34.306 [2024-12-05 19:29:27.636127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61729 ] 00:09:34.564 [2024-12-05 19:29:27.786105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.564 [2024-12-05 19:29:27.846822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.564 [2024-12-05 19:29:27.901429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:34.822 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:35.085 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:35.085 [2024-12-05 19:29:28.506553] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:35.085 [2024-12-05 19:29:28.506862] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:35.344 [2024-12-05 19:29:28.634827] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:09:35.344 ************************************ 00:09:35.344 END TEST dd_smaller_blocksize 00:09:35.344 ************************************ 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.344 00:09:35.344 real 0m1.133s 00:09:35.344 user 0m0.414s 00:09:35.344 sys 0m0.609s 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:35.344 ************************************ 00:09:35.344 START TEST dd_invalid_count 00:09:35.344 ************************************ 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
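Note that dd_smaller_blocksize above fails later than the option-parsing cases: --bs=99999999999999 is syntactically accepted, SPDK starts up, and the copy only aborts once the oversized I/O buffer cannot be allocated, producing "Cannot allocate memory - try smaller block size value" and an exit status of 244 rather than the parser's usual 22, which the wrapper still counts as a failed spdk_dd run and therefore a passing negative test. Reproduced standalone, assuming spdk_dd is on PATH and the two dump files from the setup step exist:

    # accepted by the argument parser, rejected at buffer allocation time
    ! spdk_dd --if=dd.dump0 --of=dd.dump1 --bs=99999999999999
    # stderr should include: Cannot allocate memory - try smaller block size value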
00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.344 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:35.603 [2024-12-05 19:29:28.819376] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.603 00:09:35.603 real 0m0.083s 00:09:35.603 user 0m0.053s 00:09:35.603 sys 0m0.028s 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.603 ************************************ 00:09:35.603 END TEST dd_invalid_count 00:09:35.603 ************************************ 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:35.603 ************************************ 
00:09:35.603 START TEST dd_invalid_oflag 00:09:35.603 ************************************ 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.603 19:29:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:35.604 [2024-12-05 19:29:28.974930] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:09:35.604 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:09:35.604 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.604 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:35.604 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.604 00:09:35.604 real 0m0.118s 00:09:35.604 user 0m0.070s 00:09:35.604 sys 0m0.044s 00:09:35.604 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.604 ************************************ 00:09:35.604 END TEST dd_invalid_oflag 00:09:35.604 ************************************ 00:09:35.604 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:09:35.863 19:29:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:09:35.863 19:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.863 19:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.863 19:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:35.863 ************************************ 00:09:35.864 START TEST dd_invalid_iflag 00:09:35.864 
************************************ 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:35.864 [2024-12-05 19:29:29.121591] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.864 00:09:35.864 real 0m0.074s 00:09:35.864 user 0m0.048s 00:09:35.864 sys 0m0.026s 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.864 ************************************ 00:09:35.864 END TEST dd_invalid_iflag 00:09:35.864 ************************************ 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:35.864 ************************************ 00:09:35.864 START TEST dd_unknown_flag 00:09:35.864 ************************************ 00:09:35.864 
19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.864 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:35.864 [2024-12-05 19:29:29.253686] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:35.864 [2024-12-05 19:29:29.253798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61826 ] 00:09:36.124 [2024-12-05 19:29:29.402537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.124 [2024-12-05 19:29:29.467538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.124 [2024-12-05 19:29:29.524733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.384 [2024-12-05 19:29:29.565035] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:09:36.384 [2024-12-05 19:29:29.565281] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:36.384 [2024-12-05 19:29:29.565368] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:09:36.384 [2024-12-05 19:29:29.565383] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:36.384 [2024-12-05 19:29:29.565581] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:09:36.384 [2024-12-05 19:29:29.565598] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:36.384 [2024-12-05 19:29:29.565647] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:36.384 [2024-12-05 19:29:29.565658] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:36.384 [2024-12-05 19:29:29.693174] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:36.384 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:09:36.384 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:36.384 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:09:36.384 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:09:36.384 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:09:36.385 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:36.385 00:09:36.385 real 0m0.574s 00:09:36.385 user 0m0.318s 00:09:36.385 sys 0m0.167s 00:09:36.385 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.385 ************************************ 00:09:36.385 END TEST dd_unknown_flag 00:09:36.385 ************************************ 00:09:36.385 19:29:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:09:36.385 19:29:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:09:36.385 19:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.385 19:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.385 19:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:36.385 ************************************ 00:09:36.385 START TEST dd_invalid_json 00:09:36.385 ************************************ 00:09:36.385 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:09:36.385 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:36.385 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:09:36.385 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:09:36.385 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:36.385 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.644 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.644 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.644 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.644 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.644 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.644 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.644 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:36.644 19:29:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:36.644 [2024-12-05 19:29:29.872717] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:36.644 [2024-12-05 19:29:29.872821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61855 ] 00:09:36.644 [2024-12-05 19:29:30.012500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.644 [2024-12-05 19:29:30.068982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.644 [2024-12-05 19:29:30.069077] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:36.644 [2024-12-05 19:29:30.069094] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:36.645 [2024-12-05 19:29:30.069103] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:36.645 [2024-12-05 19:29:30.069173] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:09:36.904 ************************************ 00:09:36.904 END TEST dd_invalid_json 00:09:36.904 ************************************ 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:36.904 00:09:36.904 real 0m0.316s 00:09:36.904 user 0m0.156s 00:09:36.904 sys 0m0.058s 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:36.904 ************************************ 00:09:36.904 START TEST dd_invalid_seek 00:09:36.904 ************************************ 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:36.904 
19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:36.904 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:36.904 { 00:09:36.904 "subsystems": [ 00:09:36.904 { 00:09:36.904 "subsystem": "bdev", 00:09:36.904 "config": [ 00:09:36.904 { 00:09:36.904 "params": { 00:09:36.904 "block_size": 512, 00:09:36.904 "num_blocks": 512, 00:09:36.904 "name": "malloc0" 00:09:36.904 }, 00:09:36.904 "method": "bdev_malloc_create" 00:09:36.904 }, 00:09:36.904 { 00:09:36.904 "params": { 00:09:36.904 "block_size": 512, 00:09:36.904 "num_blocks": 512, 00:09:36.904 "name": "malloc1" 00:09:36.904 }, 00:09:36.904 "method": "bdev_malloc_create" 00:09:36.904 }, 00:09:36.904 { 00:09:36.904 "method": "bdev_wait_for_examine" 00:09:36.904 } 00:09:36.904 ] 00:09:36.904 } 00:09:36.904 ] 00:09:36.904 } 00:09:36.904 [2024-12-05 19:29:30.247558] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:36.904 [2024-12-05 19:29:30.247859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61879 ] 00:09:37.162 [2024-12-05 19:29:30.395523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.162 [2024-12-05 19:29:30.454105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.162 [2024-12-05 19:29:30.512419] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:37.162 [2024-12-05 19:29:30.579379] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:09:37.162 [2024-12-05 19:29:30.579432] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:37.466 [2024-12-05 19:29:30.704933] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.466 00:09:37.466 real 0m0.586s 00:09:37.466 user 0m0.358s 00:09:37.466 sys 0m0.178s 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.466 ************************************ 00:09:37.466 END TEST dd_invalid_seek 00:09:37.466 ************************************ 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:37.466 ************************************ 00:09:37.466 START TEST dd_invalid_skip 00:09:37.466 ************************************ 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:37.466 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.467 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:09:37.467 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:37.467 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.467 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.467 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.467 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.467 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.467 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.467 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:37.467 19:29:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:37.467 { 00:09:37.467 "subsystems": [ 00:09:37.467 { 00:09:37.467 "subsystem": "bdev", 00:09:37.467 "config": [ 00:09:37.467 { 00:09:37.467 "params": { 00:09:37.467 "block_size": 512, 00:09:37.467 "num_blocks": 512, 00:09:37.467 "name": "malloc0" 00:09:37.467 }, 00:09:37.467 "method": "bdev_malloc_create" 00:09:37.467 }, 00:09:37.467 { 00:09:37.467 "params": { 00:09:37.467 "block_size": 512, 00:09:37.467 "num_blocks": 512, 00:09:37.467 "name": "malloc1" 00:09:37.467 }, 00:09:37.467 "method": "bdev_malloc_create" 00:09:37.467 }, 00:09:37.467 { 00:09:37.467 "method": "bdev_wait_for_examine" 00:09:37.467 } 00:09:37.467 ] 00:09:37.467 } 00:09:37.467 ] 00:09:37.467 } 00:09:37.467 [2024-12-05 19:29:30.882944] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:37.467 [2024-12-05 19:29:30.883049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61918 ] 00:09:37.725 [2024-12-05 19:29:31.030609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.725 [2024-12-05 19:29:31.089779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.725 [2024-12-05 19:29:31.144216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:37.984 [2024-12-05 19:29:31.208445] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:09:37.984 [2024-12-05 19:29:31.208507] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:37.984 [2024-12-05 19:29:31.333278] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:37.984 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:09:37.984 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.984 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:09:37.984 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:09:37.984 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:09:37.984 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.984 ************************************ 00:09:37.984 END TEST dd_invalid_skip 00:09:37.984 ************************************ 00:09:37.984 00:09:37.984 real 0m0.575s 00:09:37.984 user 0m0.374s 00:09:37.984 sys 0m0.158s 00:09:37.984 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.984 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:38.242 19:29:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:09:38.242 19:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.242 19:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.242 19:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:38.242 ************************************ 00:09:38.242 START TEST dd_invalid_input_count 00:09:38.242 ************************************ 00:09:38.242 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:09:38.242 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:38.242 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:38.243 19:29:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:38.243 { 00:09:38.243 "subsystems": [ 00:09:38.243 { 00:09:38.243 "subsystem": "bdev", 00:09:38.243 "config": [ 00:09:38.243 { 00:09:38.243 "params": { 00:09:38.243 "block_size": 512, 00:09:38.243 "num_blocks": 512, 00:09:38.243 "name": "malloc0" 00:09:38.243 }, 00:09:38.243 "method": "bdev_malloc_create" 00:09:38.243 }, 00:09:38.243 { 00:09:38.243 "params": { 00:09:38.243 "block_size": 512, 00:09:38.243 "num_blocks": 512, 00:09:38.243 "name": "malloc1" 00:09:38.243 }, 00:09:38.243 "method": "bdev_malloc_create" 00:09:38.243 }, 00:09:38.243 { 00:09:38.243 "method": "bdev_wait_for_examine" 00:09:38.243 } 00:09:38.243 ] 00:09:38.243 } 00:09:38.243 ] 00:09:38.243 } 00:09:38.243 [2024-12-05 19:29:31.516599] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:38.243 [2024-12-05 19:29:31.516782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61951 ] 00:09:38.243 [2024-12-05 19:29:31.670979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.501 [2024-12-05 19:29:31.732921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.501 [2024-12-05 19:29:31.792179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.501 [2024-12-05 19:29:31.857968] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:09:38.501 [2024-12-05 19:29:31.858046] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:38.760 [2024-12-05 19:29:31.995893] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:38.760 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:09:38.760 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:38.760 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:09:38.760 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:38.760 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:09:38.760 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:38.760 00:09:38.760 real 0m0.614s 00:09:38.760 user 0m0.385s 00:09:38.760 sys 0m0.180s 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:38.761 ************************************ 00:09:38.761 END TEST dd_invalid_input_count 00:09:38.761 ************************************ 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:38.761 ************************************ 00:09:38.761 START TEST dd_invalid_output_count 00:09:38.761 ************************************ 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:38.761 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:38.761 [2024-12-05 19:29:32.174357] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:38.761 [2024-12-05 19:29:32.174454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61985 ] 00:09:38.761 { 00:09:38.761 "subsystems": [ 00:09:38.761 { 00:09:38.761 "subsystem": "bdev", 00:09:38.761 "config": [ 00:09:38.761 { 00:09:38.761 "params": { 00:09:38.761 "block_size": 512, 00:09:38.761 "num_blocks": 512, 00:09:38.761 "name": "malloc0" 00:09:38.761 }, 00:09:38.761 "method": "bdev_malloc_create" 00:09:38.761 }, 00:09:38.761 { 00:09:38.761 "method": "bdev_wait_for_examine" 00:09:38.761 } 00:09:38.761 ] 00:09:38.761 } 00:09:38.761 ] 00:09:38.761 } 00:09:39.020 [2024-12-05 19:29:32.316622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.020 [2024-12-05 19:29:32.379527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.020 [2024-12-05 19:29:32.438723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.280 [2024-12-05 19:29:32.496740] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:09:39.280 [2024-12-05 19:29:32.496810] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:39.280 [2024-12-05 19:29:32.621400] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:39.280 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:09:39.280 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:39.280 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:09:39.280 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:39.280 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:09:39.280 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:39.280 00:09:39.280 real 0m0.575s 00:09:39.280 user 0m0.373s 00:09:39.280 sys 0m0.161s 00:09:39.280 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.280 ************************************ 00:09:39.280 END TEST dd_invalid_output_count 00:09:39.280 19:29:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:39.280 ************************************ 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:39.540 ************************************ 00:09:39.540 START TEST dd_bs_not_multiple 00:09:39.540 ************************************ 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:39.540 19:29:32 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:39.540 19:29:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:39.540 { 00:09:39.540 "subsystems": [ 00:09:39.540 { 00:09:39.540 "subsystem": "bdev", 00:09:39.540 "config": [ 00:09:39.540 { 00:09:39.540 "params": { 00:09:39.540 "block_size": 512, 00:09:39.540 "num_blocks": 512, 00:09:39.540 "name": "malloc0" 00:09:39.540 }, 00:09:39.540 "method": "bdev_malloc_create" 00:09:39.540 }, 00:09:39.540 { 00:09:39.540 "params": { 00:09:39.540 "block_size": 512, 00:09:39.540 "num_blocks": 512, 00:09:39.540 "name": "malloc1" 00:09:39.540 }, 00:09:39.540 "method": "bdev_malloc_create" 00:09:39.540 }, 
00:09:39.540 { 00:09:39.540 "method": "bdev_wait_for_examine" 00:09:39.540 } 00:09:39.540 ] 00:09:39.540 } 00:09:39.540 ] 00:09:39.540 } 00:09:39.540 [2024-12-05 19:29:32.806877] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:09:39.540 [2024-12-05 19:29:32.807024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62022 ] 00:09:39.540 [2024-12-05 19:29:32.955058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.800 [2024-12-05 19:29:33.014494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.800 [2024-12-05 19:29:33.073886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.800 [2024-12-05 19:29:33.140011] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:09:39.800 [2024-12-05 19:29:33.140108] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:40.059 [2024-12-05 19:29:33.265884] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:40.059 19:29:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:09:40.059 19:29:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:40.059 19:29:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:09:40.059 19:29:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:09:40.059 19:29:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:09:40.059 19:29:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:40.059 00:09:40.059 real 0m0.587s 00:09:40.059 user 0m0.376s 00:09:40.059 sys 0m0.168s 00:09:40.059 ************************************ 00:09:40.059 END TEST dd_bs_not_multiple 00:09:40.059 ************************************ 00:09:40.059 19:29:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.059 19:29:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:40.059 00:09:40.059 real 0m6.844s 00:09:40.059 user 0m3.628s 00:09:40.059 sys 0m2.618s 00:09:40.059 19:29:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.059 ************************************ 00:09:40.059 END TEST spdk_dd_negative 00:09:40.059 ************************************ 00:09:40.059 19:29:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:40.059 00:09:40.059 real 1m18.953s 00:09:40.060 user 0m50.062s 00:09:40.060 sys 0m35.725s 00:09:40.060 19:29:33 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.060 19:29:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:40.060 ************************************ 00:09:40.060 END TEST spdk_dd 00:09:40.060 ************************************ 00:09:40.060 19:29:33 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:40.060 19:29:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:40.060 19:29:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:40.060 19:29:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:40.060 19:29:33 -- common/autotest_common.sh@10 -- # 
set +x 00:09:40.060 19:29:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:40.060 19:29:33 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:09:40.060 19:29:33 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:09:40.060 19:29:33 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:09:40.060 19:29:33 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:09:40.060 19:29:33 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:09:40.060 19:29:33 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:40.060 19:29:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.060 19:29:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.060 19:29:33 -- common/autotest_common.sh@10 -- # set +x 00:09:40.319 ************************************ 00:09:40.319 START TEST nvmf_tcp 00:09:40.319 ************************************ 00:09:40.319 19:29:33 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:40.319 * Looking for test storage... 00:09:40.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:40.319 19:29:33 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:40.319 19:29:33 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:40.319 19:29:33 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.319 19:29:33 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.319 19:29:33 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:40.319 19:29:33 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.319 19:29:33 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.319 --rc genhtml_branch_coverage=1 00:09:40.319 --rc genhtml_function_coverage=1 00:09:40.319 --rc genhtml_legend=1 00:09:40.319 --rc geninfo_all_blocks=1 00:09:40.319 --rc geninfo_unexecuted_blocks=1 00:09:40.319 00:09:40.319 ' 00:09:40.319 19:29:33 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.319 --rc genhtml_branch_coverage=1 00:09:40.319 --rc genhtml_function_coverage=1 00:09:40.319 --rc genhtml_legend=1 00:09:40.319 --rc geninfo_all_blocks=1 00:09:40.319 --rc geninfo_unexecuted_blocks=1 00:09:40.319 00:09:40.319 ' 00:09:40.319 19:29:33 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:40.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.319 --rc genhtml_branch_coverage=1 00:09:40.319 --rc genhtml_function_coverage=1 00:09:40.319 --rc genhtml_legend=1 00:09:40.319 --rc geninfo_all_blocks=1 00:09:40.319 --rc geninfo_unexecuted_blocks=1 00:09:40.319 00:09:40.319 ' 00:09:40.319 19:29:33 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.319 --rc genhtml_branch_coverage=1 00:09:40.319 --rc genhtml_function_coverage=1 00:09:40.319 --rc genhtml_legend=1 00:09:40.319 --rc geninfo_all_blocks=1 00:09:40.319 --rc geninfo_unexecuted_blocks=1 00:09:40.319 00:09:40.319 ' 00:09:40.319 19:29:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:40.319 19:29:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:40.319 19:29:33 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:40.319 19:29:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.319 19:29:33 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.319 19:29:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:40.319 ************************************ 00:09:40.319 START TEST nvmf_target_core 00:09:40.319 ************************************ 00:09:40.319 19:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:40.578 * Looking for test storage... 00:09:40.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.579 --rc genhtml_branch_coverage=1 00:09:40.579 --rc genhtml_function_coverage=1 00:09:40.579 --rc genhtml_legend=1 00:09:40.579 --rc geninfo_all_blocks=1 00:09:40.579 --rc geninfo_unexecuted_blocks=1 00:09:40.579 00:09:40.579 ' 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.579 --rc genhtml_branch_coverage=1 00:09:40.579 --rc genhtml_function_coverage=1 00:09:40.579 --rc genhtml_legend=1 00:09:40.579 --rc geninfo_all_blocks=1 00:09:40.579 --rc geninfo_unexecuted_blocks=1 00:09:40.579 00:09:40.579 ' 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:40.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.579 --rc genhtml_branch_coverage=1 00:09:40.579 --rc genhtml_function_coverage=1 00:09:40.579 --rc genhtml_legend=1 00:09:40.579 --rc geninfo_all_blocks=1 00:09:40.579 --rc geninfo_unexecuted_blocks=1 00:09:40.579 00:09:40.579 ' 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.579 --rc genhtml_branch_coverage=1 00:09:40.579 --rc genhtml_function_coverage=1 00:09:40.579 --rc genhtml_legend=1 00:09:40.579 --rc geninfo_all_blocks=1 00:09:40.579 --rc geninfo_unexecuted_blocks=1 00:09:40.579 00:09:40.579 ' 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:40.579 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:40.579 ************************************ 00:09:40.579 START TEST nvmf_host_management 00:09:40.579 ************************************ 00:09:40.579 19:29:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:40.840 * Looking for test storage... 
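Note on the "[: : integer expression expected" message above: it is test(1) complaining about the traced check '[' '' -eq 1 ']' at common.sh line 33, i.e. a numeric comparison handed an empty string. The run is unaffected beyond the stderr noise, because the non-zero status simply means the branch is not taken. A minimal sketch of the usual guard, using a hypothetical variable name (the excerpt does not show which variable common.sh actually tests at that line):

# Sketch of a defensive default for the numeric test; SPDK_TEST_FOO is a
# placeholder name, not the variable actually evaluated at common.sh line 33.
SPDK_TEST_FOO=${SPDK_TEST_FOO:-0}   # empty/unset collapses to 0
if [ "$SPDK_TEST_FOO" -eq 1 ]; then
    echo "feature enabled"
fi

With a default like this applied, the xtrace would show '[' 0 -eq 1 ']' instead of an empty operand and the stderr message would disappear.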
00:09:40.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.840 --rc genhtml_branch_coverage=1 00:09:40.840 --rc genhtml_function_coverage=1 00:09:40.840 --rc genhtml_legend=1 00:09:40.840 --rc geninfo_all_blocks=1 00:09:40.840 --rc geninfo_unexecuted_blocks=1 00:09:40.840 00:09:40.840 ' 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.840 --rc genhtml_branch_coverage=1 00:09:40.840 --rc genhtml_function_coverage=1 00:09:40.840 --rc genhtml_legend=1 00:09:40.840 --rc geninfo_all_blocks=1 00:09:40.840 --rc geninfo_unexecuted_blocks=1 00:09:40.840 00:09:40.840 ' 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:40.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.840 --rc genhtml_branch_coverage=1 00:09:40.840 --rc genhtml_function_coverage=1 00:09:40.840 --rc genhtml_legend=1 00:09:40.840 --rc geninfo_all_blocks=1 00:09:40.840 --rc geninfo_unexecuted_blocks=1 00:09:40.840 00:09:40.840 ' 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.840 --rc genhtml_branch_coverage=1 00:09:40.840 --rc genhtml_function_coverage=1 00:09:40.840 --rc genhtml_legend=1 00:09:40.840 --rc geninfo_all_blocks=1 00:09:40.840 --rc geninfo_unexecuted_blocks=1 00:09:40.840 00:09:40.840 ' 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
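The repeated lcov --version / lt 1.15 2 traces above (for nvmf_tcp, nvmf_target_core and nvmf_host_management alike) all exercise the same cmp_versions helper from scripts/common.sh: both version strings are split on '.', '-' and ':' into arrays and compared element by element to decide whether the installed lcov predates version 2, which in turn selects the LCOV_OPTS/LCOV flag sets exported right afterwards. A condensed re-implementation of that comparison, written here only to make the xtrace easier to follow (it is not the verbatim SPDK function):

# Component-wise "is ver1 < ver2" check, mirroring the IFS=.-: / read -ra
# pattern visible in the xtrace above.
version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov < 2: use the pre-2.x option set"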
00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.840 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:40.841 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.841 19:29:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:40.841 Cannot find device "nvmf_init_br" 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:40.841 Cannot find device "nvmf_init_br2" 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:40.841 Cannot find device "nvmf_tgt_br" 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:40.841 Cannot find device "nvmf_tgt_br2" 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:40.841 Cannot find device "nvmf_init_br" 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:40.841 Cannot find device "nvmf_init_br2" 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:09:40.841 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:41.100 Cannot find device "nvmf_tgt_br" 00:09:41.100 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:09:41.100 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:41.100 Cannot find device "nvmf_tgt_br2" 00:09:41.100 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:09:41.100 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:41.100 Cannot find device "nvmf_br" 00:09:41.100 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:09:41.100 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:41.100 Cannot find device "nvmf_init_if" 00:09:41.100 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:09:41.100 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:41.100 Cannot find device "nvmf_init_if2" 00:09:41.100 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:09:41.100 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:41.100 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.100 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:09:41.100 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:41.100 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.100 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:09:41.100 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:41.101 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:41.360 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:41.360 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:09:41.360 00:09:41.360 --- 10.0.0.3 ping statistics --- 00:09:41.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.360 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:41.360 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:41.360 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:09:41.360 00:09:41.360 --- 10.0.0.4 ping statistics --- 00:09:41.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.360 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:41.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:09:41.360 00:09:41.360 --- 10.0.0.1 ping statistics --- 00:09:41.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.360 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:41.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:41.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:09:41.360 00:09:41.360 --- 10.0.0.2 ping statistics --- 00:09:41.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.360 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62361 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62361 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62361 ']' 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.360 19:29:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.619 [2024-12-05 19:29:34.818095] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
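The nvmf_veth_init steps above (common.sh@177 through @219) build the virtual fabric that the four pings then verify: an nvmf_tgt_ns_spdk namespace, veth pairs whose bridge-side ends are enslaved to nvmf_br, 10.0.0.1-10.0.0.4/24 addressing split between host and namespace, and iptables ACCEPT rules for TCP port 4420. A stripped-down sketch of the same topology with a single initiator/target pair (device names and addresses are taken from the log; the second pair, cleanup and error handling are omitted):

# Minimal one-pair version of the veth/bridge fabric brought up above (root required).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3    # same initiator-to-target reachability check as in the log

With this in place the namespaced target can listen on 10.0.0.3:4420 while the initiator connects from 10.0.0.1, which is exactly what the nvmf_tgt started below relies on.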
00:09:41.619 [2024-12-05 19:29:34.818270] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.619 [2024-12-05 19:29:34.971411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.619 [2024-12-05 19:29:35.041464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.619 [2024-12-05 19:29:35.041542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.619 [2024-12-05 19:29:35.041569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.619 [2024-12-05 19:29:35.041579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.619 [2024-12-05 19:29:35.041588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.619 [2024-12-05 19:29:35.042925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.619 [2024-12-05 19:29:35.043041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.619 [2024-12-05 19:29:35.043171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:41.619 [2024-12-05 19:29:35.043172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.876 [2024-12-05 19:29:35.102021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.876 [2024-12-05 19:29:35.211846] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.876 Malloc0 00:09:41.876 [2024-12-05 19:29:35.299051] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:41.876 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62402 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62402 /var/tmp/bdevperf.sock 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62402 ']' 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.133 { 00:09:42.133 "params": { 00:09:42.133 "name": "Nvme$subsystem", 00:09:42.133 "trtype": "$TEST_TRANSPORT", 00:09:42.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.133 "adrfam": "ipv4", 00:09:42.133 "trsvcid": "$NVMF_PORT", 00:09:42.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.133 "hdgst": ${hdgst:-false}, 00:09:42.133 "ddgst": ${ddgst:-false} 00:09:42.133 }, 00:09:42.133 "method": "bdev_nvme_attach_controller" 00:09:42.133 } 00:09:42.133 EOF 00:09:42.133 )") 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:42.133 19:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.133 "params": { 00:09:42.133 "name": "Nvme0", 00:09:42.133 "trtype": "tcp", 00:09:42.133 "traddr": "10.0.0.3", 00:09:42.133 "adrfam": "ipv4", 00:09:42.133 "trsvcid": "4420", 00:09:42.133 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:42.133 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:42.133 "hdgst": false, 00:09:42.133 "ddgst": false 00:09:42.133 }, 00:09:42.133 "method": "bdev_nvme_attach_controller" 00:09:42.133 }' 00:09:42.133 [2024-12-05 19:29:35.400282] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:09:42.134 [2024-12-05 19:29:35.400372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62402 ] 00:09:42.134 [2024-12-05 19:29:35.555180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.391 [2024-12-05 19:29:35.634627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.391 [2024-12-05 19:29:35.700292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.391 Running I/O for 10 seconds... 
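The gen_nvmf_target_json 0 call above assembles the bdevperf configuration by expanding a per-subsystem heredoc into a bdev_nvme_attach_controller JSON fragment and handing the result to bdevperf through process substitution (--json /dev/fd/63). A reduced sketch of that templating pattern, using the same field names as the rendered config in the log; the single-entry layout and paths are illustrative, not the full test configuration:

#!/usr/bin/env bash
# Sketch of the heredoc-templating pattern used by gen_nvmf_target_json above:
# expand one controller entry per subsystem and feed it to the consumer via
# process substitution, exactly as --json /dev/fd/63 in the traced command.
NVMF_FIRST_TARGET_IP=10.0.0.3
NVMF_PORT=4420

gen_controller_json() {
    local subsystem=$1
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# Same verify workload parameters as the traced run; the binary path is the
# one the test used on this machine.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_controller_json 0) -q 64 -o 65536 -w verify -t 10

The waitforio loop that follows ("Running I/O for 10 seconds...") then polls bdev_get_iostat over /var/tmp/bdevperf.sock until num_read_ops crosses the 100-operation threshold before the host-management steps proceed.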
00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1091 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1091 -ge 100 ']' 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.370 19:29:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.370 19:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:43.370 [2024-12-05 19:29:36.629717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.370 [2024-12-05 19:29:36.629765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.370 [2024-12-05 19:29:36.629790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.370 [2024-12-05 19:29:36.629801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.370 [2024-12-05 19:29:36.629812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.370 [2024-12-05 19:29:36.629822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.370 [2024-12-05 19:29:36.629833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.370 [2024-12-05 19:29:36.629842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.629854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.629863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.629874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.629883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.629893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.629902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.629914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.629923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.629934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.629943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.629954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.629972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.629984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.629993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:09:43.371 [2024-12-05 19:29:36.630382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 
[2024-12-05 19:29:36.630586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.371 [2024-12-05 19:29:36.630678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.371 [2024-12-05 19:29:36.630688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.630697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.630708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.630717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.630728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.630737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.630752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.630761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.630772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.630781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 
19:29:36.630792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.630801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.630811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.630820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.630831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.630841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.630851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.630860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.630871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.630879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.630890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.630899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.630910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.630919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.630930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.630939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.630950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.630959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.630970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.630983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 
19:29:36.630993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.631003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.631014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.631023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.631034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.631043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.631054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.631063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.631079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.631088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.631099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:43.372 [2024-12-05 19:29:36.631108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.631128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12512d0 is same with the state(6) to be set 00:09:43.372 [2024-12-05 19:29:36.631294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:43.372 [2024-12-05 19:29:36.631322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.631334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:43.372 [2024-12-05 19:29:36.631342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.631352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:43.372 [2024-12-05 19:29:36.631361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.631371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:43.372 [2024-12-05 19:29:36.631379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.372 [2024-12-05 19:29:36.631388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1256ce0 is same with the state(6) to be set 00:09:43.372 [2024-12-05 19:29:36.632485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:43.372 task offset: 24576 on job bdev=Nvme0n1 fails 00:09:43.372 00:09:43.372 Latency(us) 00:09:43.372 [2024-12-05T19:29:36.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.372 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:43.372 Job: Nvme0n1 ended in about 0.81 seconds with error 00:09:43.372 Verification LBA range: start 0x0 length 0x400 00:09:43.372 Nvme0n1 : 0.81 1504.23 94.01 79.17 0.00 39527.98 2129.92 39559.91 00:09:43.372 [2024-12-05T19:29:36.809Z] =================================================================================================================== 00:09:43.372 [2024-12-05T19:29:36.809Z] Total : 1504.23 94.01 79.17 0.00 39527.98 2129.92 39559.91 00:09:43.372 [2024-12-05 19:29:36.634429] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:43.372 [2024-12-05 19:29:36.634456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1256ce0 (9): Bad file descriptor 00:09:43.372 [2024-12-05 19:29:36.641568] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:09:44.310 19:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62402 00:09:44.310 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62402) - No such process 00:09:44.310 19:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:44.310 19:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:44.310 19:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:44.310 19:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:44.310 19:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:44.310 19:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:44.310 19:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:44.310 19:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:44.310 { 00:09:44.310 "params": { 00:09:44.310 "name": "Nvme$subsystem", 00:09:44.310 "trtype": "$TEST_TRANSPORT", 00:09:44.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.310 "adrfam": "ipv4", 00:09:44.310 "trsvcid": "$NVMF_PORT", 00:09:44.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.310 "hdgst": ${hdgst:-false}, 00:09:44.310 "ddgst": ${ddgst:-false} 00:09:44.310 }, 00:09:44.310 "method": "bdev_nvme_attach_controller" 00:09:44.310 } 00:09:44.310 EOF 00:09:44.310 )") 
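The abort completions and the controller reset above are the intended effect of the host-management step traced earlier: while bdevperf drives I/O at cnode0, the test revokes the host's access and then restores it. A minimal sketch of that RPC sequence, assuming rpc_cmd resolves to scripts/rpc.py against the default target socket as it does elsewhere in this log; the NQNs are the ones shown in the trace.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Revoke host0's access while I/O is in flight: outstanding commands complete
    # as ABORTED - SQ DELETION (the storm above) and the initiator disconnects.
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Restore access; bdevperf's reset path reconnects the controller.
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1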
00:09:44.310 19:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:44.310 19:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:44.310 19:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:44.310 19:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:44.310 "params": { 00:09:44.310 "name": "Nvme0", 00:09:44.310 "trtype": "tcp", 00:09:44.310 "traddr": "10.0.0.3", 00:09:44.310 "adrfam": "ipv4", 00:09:44.310 "trsvcid": "4420", 00:09:44.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:44.310 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:44.310 "hdgst": false, 00:09:44.310 "ddgst": false 00:09:44.310 }, 00:09:44.310 "method": "bdev_nvme_attach_controller" 00:09:44.310 }' 00:09:44.310 [2024-12-05 19:29:37.692365] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:09:44.310 [2024-12-05 19:29:37.692483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62444 ] 00:09:44.569 [2024-12-05 19:29:37.842889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.569 [2024-12-05 19:29:37.901902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.569 [2024-12-05 19:29:37.963976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:44.828 Running I/O for 1 seconds... 00:09:45.764 1536.00 IOPS, 96.00 MiB/s 00:09:45.764 Latency(us) 00:09:45.764 [2024-12-05T19:29:39.201Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.764 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:45.764 Verification LBA range: start 0x0 length 0x400 00:09:45.764 Nvme0n1 : 1.03 1553.38 97.09 0.00 0.00 40395.71 4289.63 38606.66 00:09:45.764 [2024-12-05T19:29:39.201Z] =================================================================================================================== 00:09:45.764 [2024-12-05T19:29:39.201Z] Total : 1553.38 97.09 0.00 0.00 40395.71 4289.63 38606.66 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:46.022 rmmod nvme_tcp 00:09:46.022 rmmod nvme_fabrics 00:09:46.022 rmmod nvme_keyring 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62361 ']' 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62361 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62361 ']' 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62361 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.022 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62361 00:09:46.280 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:46.280 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:46.280 killing process with pid 62361 00:09:46.280 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62361' 00:09:46.280 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62361 00:09:46.280 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62361 00:09:46.280 [2024-12-05 19:29:39.665665] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:46.280 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:46.280 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:46.280 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:46.280 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:46.280 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:46.280 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:46.280 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:46.280 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:46.280 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:46.280 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:46.539 19:29:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:46.539 00:09:46.539 real 0m5.981s 00:09:46.539 user 0m21.531s 00:09:46.539 sys 0m1.667s 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.539 19:29:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:46.539 ************************************ 00:09:46.539 END TEST nvmf_host_management 00:09:46.539 ************************************ 00:09:46.798 19:29:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:46.798 19:29:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:46.798 19:29:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.798 19:29:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.798 ************************************ 00:09:46.798 START TEST nvmf_lvol 00:09:46.798 ************************************ 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:46.798 * Looking for test storage... 
00:09:46.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:46.798 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:46.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.799 --rc genhtml_branch_coverage=1 00:09:46.799 --rc genhtml_function_coverage=1 00:09:46.799 --rc genhtml_legend=1 00:09:46.799 --rc geninfo_all_blocks=1 00:09:46.799 --rc geninfo_unexecuted_blocks=1 00:09:46.799 00:09:46.799 ' 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:46.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.799 --rc genhtml_branch_coverage=1 00:09:46.799 --rc genhtml_function_coverage=1 00:09:46.799 --rc genhtml_legend=1 00:09:46.799 --rc geninfo_all_blocks=1 00:09:46.799 --rc geninfo_unexecuted_blocks=1 00:09:46.799 00:09:46.799 ' 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:46.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.799 --rc genhtml_branch_coverage=1 00:09:46.799 --rc genhtml_function_coverage=1 00:09:46.799 --rc genhtml_legend=1 00:09:46.799 --rc geninfo_all_blocks=1 00:09:46.799 --rc geninfo_unexecuted_blocks=1 00:09:46.799 00:09:46.799 ' 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:46.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.799 --rc genhtml_branch_coverage=1 00:09:46.799 --rc genhtml_function_coverage=1 00:09:46.799 --rc genhtml_legend=1 00:09:46.799 --rc geninfo_all_blocks=1 00:09:46.799 --rc geninfo_unexecuted_blocks=1 00:09:46.799 00:09:46.799 ' 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.799 19:29:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.799 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:46.799 
19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:46.799 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:46.800 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:46.800 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:46.800 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.800 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:46.800 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:46.800 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:46.800 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.800 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.800 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
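Before the interface checks and setup traced below, it may help to see the topology nvmf_veth_init is about to build from the variables just defined: two initiator-side veth pairs and two target-side veth pairs, with the target ends moved into the nvmf_tgt_ns_spdk namespace and all bridge ends enslaved to nvmf_br. A condensed sketch of the commands that follow in the trace (link-up steps and the iptables rule omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br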
00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:47.059 Cannot find device "nvmf_init_br" 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:47.059 Cannot find device "nvmf_init_br2" 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:47.059 Cannot find device "nvmf_tgt_br" 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.059 Cannot find device "nvmf_tgt_br2" 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:47.059 Cannot find device "nvmf_init_br" 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:47.059 Cannot find device "nvmf_init_br2" 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:47.059 Cannot find device "nvmf_tgt_br" 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:47.059 Cannot find device "nvmf_tgt_br2" 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:47.059 Cannot find device "nvmf_br" 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:47.059 Cannot find device "nvmf_init_if" 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:47.059 Cannot find device "nvmf_init_if2" 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:47.059 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:47.059 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:47.059 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:47.318 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:47.318 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:09:47.318 00:09:47.318 --- 10.0.0.3 ping statistics --- 00:09:47.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.318 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:47.318 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:47.318 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:09:47.318 00:09:47.318 --- 10.0.0.4 ping statistics --- 00:09:47.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.318 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:47.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:47.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:09:47.318 00:09:47.318 --- 10.0.0.1 ping statistics --- 00:09:47.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.318 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:47.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:47.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:09:47.318 00:09:47.318 --- 10.0.0.2 ping statistics --- 00:09:47.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.318 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62711 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62711 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62711 ']' 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.318 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.319 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.319 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.319 19:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:47.319 [2024-12-05 19:29:40.731630] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:47.319 [2024-12-05 19:29:40.731728] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.578 [2024-12-05 19:29:40.888627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.578 [2024-12-05 19:29:40.956824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.578 [2024-12-05 19:29:40.956901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.578 [2024-12-05 19:29:40.956926] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.578 [2024-12-05 19:29:40.956936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.578 [2024-12-05 19:29:40.956945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.578 [2024-12-05 19:29:40.958172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.578 [2024-12-05 19:29:40.958223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.578 [2024-12-05 19:29:40.958226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.836 [2024-12-05 19:29:41.014891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:48.439 19:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.439 19:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:48.439 19:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:48.439 19:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:48.439 19:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:48.439 19:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.439 19:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:48.697 [2024-12-05 19:29:41.992256] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.697 19:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:48.955 19:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:48.955 19:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.213 19:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:49.213 19:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:49.471 19:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:50.037 19:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=95c16844-5cf0-4767-adf6-836e9b03a675 00:09:50.037 19:29:43 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 95c16844-5cf0-4767-adf6-836e9b03a675 lvol 20 00:09:50.295 19:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2a7e2faa-f41a-48c0-8e10-8876d4915ee0 00:09:50.295 19:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:50.553 19:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2a7e2faa-f41a-48c0-8e10-8876d4915ee0 00:09:50.812 19:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:51.071 [2024-12-05 19:29:44.411969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:51.071 19:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:51.329 19:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62792 00:09:51.329 19:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:51.329 19:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:52.703 19:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 2a7e2faa-f41a-48c0-8e10-8876d4915ee0 MY_SNAPSHOT 00:09:52.703 19:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=af14cc40-bf14-4c20-ae3d-f5839e2a3afb 00:09:52.703 19:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 2a7e2faa-f41a-48c0-8e10-8876d4915ee0 30 00:09:53.270 19:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone af14cc40-bf14-4c20-ae3d-f5839e2a3afb MY_CLONE 00:09:53.528 19:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f8ddba56-57ac-42d2-9b9c-6a0fe0e0ea0b 00:09:53.528 19:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate f8ddba56-57ac-42d2-9b9c-6a0fe0e0ea0b 00:09:54.094 19:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62792 00:10:02.347 Initializing NVMe Controllers 00:10:02.347 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:10:02.347 Controller IO queue size 128, less than required. 00:10:02.347 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:02.347 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:02.347 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:02.347 Initialization complete. Launching workers. 
00:10:02.347 ======================================================== 00:10:02.347 Latency(us) 00:10:02.347 Device Information : IOPS MiB/s Average min max 00:10:02.347 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10709.80 41.84 11954.51 2139.97 52075.37 00:10:02.347 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10754.70 42.01 11905.21 587.97 90267.10 00:10:02.347 ======================================================== 00:10:02.347 Total : 21464.50 83.85 11929.81 587.97 90267.10 00:10:02.347 00:10:02.347 19:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:02.347 19:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2a7e2faa-f41a-48c0-8e10-8876d4915ee0 00:10:02.347 19:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 95c16844-5cf0-4767-adf6-836e9b03a675 00:10:02.606 19:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:02.606 19:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:02.606 19:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:02.606 19:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:02.606 19:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:02.606 19:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:02.606 19:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:02.606 19:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:02.606 19:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:02.606 rmmod nvme_tcp 00:10:02.606 rmmod nvme_fabrics 00:10:02.606 rmmod nvme_keyring 00:10:02.865 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:02.865 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:02.865 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:02.865 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62711 ']' 00:10:02.865 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62711 00:10:02.865 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62711 ']' 00:10:02.865 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62711 00:10:02.865 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:10:02.865 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.865 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62711 00:10:02.865 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.865 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.865 killing process with pid 62711 00:10:02.865 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62711' 00:10:02.865 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62711 00:10:02.865 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62711 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:03.124 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:03.383 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:03.383 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.383 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.383 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.383 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:10:03.383 ************************************ 00:10:03.383 END TEST nvmf_lvol 00:10:03.383 ************************************ 00:10:03.383 00:10:03.383 real 0m16.608s 00:10:03.383 user 
1m7.466s 00:10:03.383 sys 0m4.532s 00:10:03.383 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.383 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:03.383 19:29:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:03.383 19:29:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:03.383 19:29:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.383 19:29:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.383 ************************************ 00:10:03.383 START TEST nvmf_lvs_grow 00:10:03.383 ************************************ 00:10:03.383 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:03.383 * Looking for test storage... 00:10:03.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:03.383 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:03.383 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:10:03.383 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:03.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.643 --rc genhtml_branch_coverage=1 00:10:03.643 --rc genhtml_function_coverage=1 00:10:03.643 --rc genhtml_legend=1 00:10:03.643 --rc geninfo_all_blocks=1 00:10:03.643 --rc geninfo_unexecuted_blocks=1 00:10:03.643 00:10:03.643 ' 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:03.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.643 --rc genhtml_branch_coverage=1 00:10:03.643 --rc genhtml_function_coverage=1 00:10:03.643 --rc genhtml_legend=1 00:10:03.643 --rc geninfo_all_blocks=1 00:10:03.643 --rc geninfo_unexecuted_blocks=1 00:10:03.643 00:10:03.643 ' 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:03.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.643 --rc genhtml_branch_coverage=1 00:10:03.643 --rc genhtml_function_coverage=1 00:10:03.643 --rc genhtml_legend=1 00:10:03.643 --rc geninfo_all_blocks=1 00:10:03.643 --rc geninfo_unexecuted_blocks=1 00:10:03.643 00:10:03.643 ' 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:03.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.643 --rc genhtml_branch_coverage=1 00:10:03.643 --rc genhtml_function_coverage=1 00:10:03.643 --rc genhtml_legend=1 00:10:03.643 --rc geninfo_all_blocks=1 00:10:03.643 --rc geninfo_unexecuted_blocks=1 00:10:03.643 00:10:03.643 ' 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:03.643 19:29:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.643 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:03.644 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
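Before the harness re-runs nvmftestinit below, the sequence that lvs_grow_clean is about to drive against the target is worth seeing in one place: an AIO bdev backed by a 200M file, a 4MiB-cluster lvstore on top of it, a 150M lvol, then the backing file is grown to 400M and the AIO bdev rescanned so the lvstore's cluster count can be re-checked. A condensed sketch of those steps, assuming the same paths, names and sizes as this run (the individual rpc.py invocations are traced further down), not the literal nvmf_lvs_grow.sh:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

truncate -s 200M "$aio"                                    # backing file for the AIO bdev
"$rpc" bdev_aio_create "$aio" aio_bdev 4096                # expose it as bdev "aio_bdev" with 4K blocks
lvs=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)       # 4MiB clusters; 49 data clusters in this run
lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 150)         # 150M lvol inside the store
truncate -s 400M "$aio"                                    # grow the backing file
"$rpc" bdev_aio_rescan aio_bdev                            # let the AIO bdev pick up the new size
"$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'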
00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:03.644 Cannot find device "nvmf_init_br" 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:03.644 Cannot find device "nvmf_init_br2" 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:03.644 Cannot find device "nvmf_tgt_br" 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:03.644 Cannot find device "nvmf_tgt_br2" 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:03.644 Cannot find device "nvmf_init_br" 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:03.644 Cannot find device "nvmf_init_br2" 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:03.644 Cannot find device "nvmf_tgt_br" 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:03.644 Cannot find device "nvmf_tgt_br2" 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:03.644 Cannot find device "nvmf_br" 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:03.644 Cannot find device "nvmf_init_if" 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:03.644 Cannot find device "nvmf_init_if2" 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:03.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:10:03.644 19:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:03.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:10:03.645 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:10:03.645 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:03.645 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:03.645 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:03.645 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:03.645 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:03.645 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:03.645 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:03.645 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:03.645 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:03.645 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
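With the bridge assembled, the harness opens the NVMe/TCP port and allows bridge-local forwarding. The ipts wrapper tags every rule it inserts with an SPDK_NVMF comment so the iptr teardown helper (seen earlier at the end of the nvmf_lvol run) can strip exactly those rules again. A minimal sketch of that pair, reconstructed from this trace rather than copied verbatim from nvmf/common.sh:

ipts() {
  # apply the given iptables rule, tagged so it can be located again at teardown
  iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
  # drop every rule carrying the SPDK_NVMF tag, leave all other rules untouched
  iptables-save | grep -v SPDK_NVMF | iptables-restore
}

# the three rules added for this run:
ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT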
00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:03.904 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:03.904 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:10:03.904 00:10:03.904 --- 10.0.0.3 ping statistics --- 00:10:03.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.904 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:03.904 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:03.904 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:10:03.904 00:10:03.904 --- 10.0.0.4 ping statistics --- 00:10:03.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.904 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:03.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:03.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:03.904 00:10:03.904 --- 10.0.0.1 ping statistics --- 00:10:03.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.904 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:03.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:03.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:10:03.904 00:10:03.904 --- 10.0.0.2 ping statistics --- 00:10:03.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.904 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63176 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63176 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63176 ']' 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:03.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.904 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:03.904 [2024-12-05 19:29:57.331130] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:10:03.904 [2024-12-05 19:29:57.331242] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.164 [2024-12-05 19:29:57.480660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.164 [2024-12-05 19:29:57.532997] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.164 [2024-12-05 19:29:57.533063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.164 [2024-12-05 19:29:57.533074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.164 [2024-12-05 19:29:57.533082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.164 [2024-12-05 19:29:57.533088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:04.164 [2024-12-05 19:29:57.533519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.164 [2024-12-05 19:29:57.589301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:04.424 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.424 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:10:04.424 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:04.424 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.424 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:04.424 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.424 19:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:04.684 [2024-12-05 19:29:57.996696] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.684 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:04.684 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.684 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.684 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:04.684 ************************************ 00:10:04.684 START TEST lvs_grow_clean 00:10:04.684 ************************************ 00:10:04.684 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:10:04.684 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:04.684 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:04.684 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:04.684 19:29:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:04.684 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:04.684 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:04.684 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:04.684 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:04.684 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:04.944 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:04.944 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:05.513 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a4c7f81c-5f30-4ef6-ab2e-f9ec1f4a49b0 00:10:05.513 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:05.513 19:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4c7f81c-5f30-4ef6-ab2e-f9ec1f4a49b0 00:10:05.771 19:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:05.771 19:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:05.771 19:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a4c7f81c-5f30-4ef6-ab2e-f9ec1f4a49b0 lvol 150 00:10:06.030 19:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ebe084de-e048-4019-8867-a65b4170fb55 00:10:06.030 19:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:06.030 19:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:06.289 [2024-12-05 19:29:59.525143] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:06.289 [2024-12-05 19:29:59.525248] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:06.289 true 00:10:06.289 19:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4c7f81c-5f30-4ef6-ab2e-f9ec1f4a49b0 00:10:06.289 19:29:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:06.548 19:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:06.549 19:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:06.807 19:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ebe084de-e048-4019-8867-a65b4170fb55 00:10:07.066 19:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:07.324 [2024-12-05 19:30:00.557816] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:07.324 19:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:07.583 19:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:07.583 19:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63252 00:10:07.583 19:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:07.583 19:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63252 /var/tmp/bdevperf.sock 00:10:07.583 19:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63252 ']' 00:10:07.583 19:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:07.583 19:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:07.583 19:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:07.583 19:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.583 19:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:07.583 [2024-12-05 19:30:00.912849] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:10:07.583 [2024-12-05 19:30:00.912950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63252 ] 00:10:07.860 [2024-12-05 19:30:01.059830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.860 [2024-12-05 19:30:01.131779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.860 [2024-12-05 19:30:01.193934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:07.860 19:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.860 19:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:10:07.860 19:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:08.425 Nvme0n1 00:10:08.425 19:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:08.683 [ 00:10:08.683 { 00:10:08.683 "name": "Nvme0n1", 00:10:08.683 "aliases": [ 00:10:08.683 "ebe084de-e048-4019-8867-a65b4170fb55" 00:10:08.683 ], 00:10:08.683 "product_name": "NVMe disk", 00:10:08.683 "block_size": 4096, 00:10:08.683 "num_blocks": 38912, 00:10:08.683 "uuid": "ebe084de-e048-4019-8867-a65b4170fb55", 00:10:08.683 "numa_id": -1, 00:10:08.683 "assigned_rate_limits": { 00:10:08.683 "rw_ios_per_sec": 0, 00:10:08.683 "rw_mbytes_per_sec": 0, 00:10:08.683 "r_mbytes_per_sec": 0, 00:10:08.683 "w_mbytes_per_sec": 0 00:10:08.683 }, 00:10:08.683 "claimed": false, 00:10:08.683 "zoned": false, 00:10:08.683 "supported_io_types": { 00:10:08.683 "read": true, 00:10:08.683 "write": true, 00:10:08.683 "unmap": true, 00:10:08.683 "flush": true, 00:10:08.683 "reset": true, 00:10:08.683 "nvme_admin": true, 00:10:08.683 "nvme_io": true, 00:10:08.683 "nvme_io_md": false, 00:10:08.683 "write_zeroes": true, 00:10:08.683 "zcopy": false, 00:10:08.683 "get_zone_info": false, 00:10:08.683 "zone_management": false, 00:10:08.683 "zone_append": false, 00:10:08.683 "compare": true, 00:10:08.683 "compare_and_write": true, 00:10:08.683 "abort": true, 00:10:08.683 "seek_hole": false, 00:10:08.683 "seek_data": false, 00:10:08.683 "copy": true, 00:10:08.683 "nvme_iov_md": false 00:10:08.683 }, 00:10:08.683 "memory_domains": [ 00:10:08.683 { 00:10:08.683 "dma_device_id": "system", 00:10:08.683 "dma_device_type": 1 00:10:08.683 } 00:10:08.683 ], 00:10:08.683 "driver_specific": { 00:10:08.683 "nvme": [ 00:10:08.683 { 00:10:08.683 "trid": { 00:10:08.683 "trtype": "TCP", 00:10:08.683 "adrfam": "IPv4", 00:10:08.683 "traddr": "10.0.0.3", 00:10:08.683 "trsvcid": "4420", 00:10:08.683 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:08.683 }, 00:10:08.683 "ctrlr_data": { 00:10:08.683 "cntlid": 1, 00:10:08.683 "vendor_id": "0x8086", 00:10:08.683 "model_number": "SPDK bdev Controller", 00:10:08.683 "serial_number": "SPDK0", 00:10:08.683 "firmware_revision": "25.01", 00:10:08.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:08.683 "oacs": { 00:10:08.683 "security": 0, 00:10:08.683 "format": 0, 00:10:08.683 "firmware": 0, 
00:10:08.683 "ns_manage": 0 00:10:08.683 }, 00:10:08.683 "multi_ctrlr": true, 00:10:08.683 "ana_reporting": false 00:10:08.683 }, 00:10:08.683 "vs": { 00:10:08.683 "nvme_version": "1.3" 00:10:08.683 }, 00:10:08.683 "ns_data": { 00:10:08.683 "id": 1, 00:10:08.683 "can_share": true 00:10:08.683 } 00:10:08.683 } 00:10:08.683 ], 00:10:08.683 "mp_policy": "active_passive" 00:10:08.683 } 00:10:08.683 } 00:10:08.683 ] 00:10:08.683 19:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:08.683 19:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63271 00:10:08.683 19:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:08.683 Running I/O for 10 seconds... 00:10:09.617 Latency(us) 00:10:09.617 [2024-12-05T19:30:03.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.617 Nvme0n1 : 1.00 7085.00 27.68 0.00 0.00 0.00 0.00 0.00 00:10:09.617 [2024-12-05T19:30:03.054Z] =================================================================================================================== 00:10:09.617 [2024-12-05T19:30:03.054Z] Total : 7085.00 27.68 0.00 0.00 0.00 0.00 0.00 00:10:09.617 00:10:10.553 19:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a4c7f81c-5f30-4ef6-ab2e-f9ec1f4a49b0 00:10:10.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.812 Nvme0n1 : 2.00 6908.00 26.98 0.00 0.00 0.00 0.00 0.00 00:10:10.812 [2024-12-05T19:30:04.249Z] =================================================================================================================== 00:10:10.812 [2024-12-05T19:30:04.249Z] Total : 6908.00 26.98 0.00 0.00 0.00 0.00 0.00 00:10:10.812 00:10:11.071 true 00:10:11.071 19:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4c7f81c-5f30-4ef6-ab2e-f9ec1f4a49b0 00:10:11.071 19:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:11.330 19:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:11.330 19:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:11.330 19:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63271 00:10:11.906 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.906 Nvme0n1 : 3.00 6933.67 27.08 0.00 0.00 0.00 0.00 0.00 00:10:11.906 [2024-12-05T19:30:05.343Z] =================================================================================================================== 00:10:11.906 [2024-12-05T19:30:05.343Z] Total : 6933.67 27.08 0.00 0.00 0.00 0.00 0.00 00:10:11.906 00:10:12.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:12.879 Nvme0n1 : 4.00 6914.75 27.01 0.00 0.00 0.00 0.00 0.00 00:10:12.879 [2024-12-05T19:30:06.316Z] 
=================================================================================================================== 00:10:12.879 [2024-12-05T19:30:06.316Z] Total : 6914.75 27.01 0.00 0.00 0.00 0.00 0.00 00:10:12.879 00:10:13.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.817 Nvme0n1 : 5.00 6903.40 26.97 0.00 0.00 0.00 0.00 0.00 00:10:13.817 [2024-12-05T19:30:07.254Z] =================================================================================================================== 00:10:13.817 [2024-12-05T19:30:07.254Z] Total : 6903.40 26.97 0.00 0.00 0.00 0.00 0.00 00:10:13.817 00:10:14.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.753 Nvme0n1 : 6.00 6785.83 26.51 0.00 0.00 0.00 0.00 0.00 00:10:14.753 [2024-12-05T19:30:08.190Z] =================================================================================================================== 00:10:14.753 [2024-12-05T19:30:08.190Z] Total : 6785.83 26.51 0.00 0.00 0.00 0.00 0.00 00:10:14.753 00:10:15.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.689 Nvme0n1 : 7.00 6814.29 26.62 0.00 0.00 0.00 0.00 0.00 00:10:15.689 [2024-12-05T19:30:09.126Z] =================================================================================================================== 00:10:15.689 [2024-12-05T19:30:09.126Z] Total : 6814.29 26.62 0.00 0.00 0.00 0.00 0.00 00:10:15.689 00:10:16.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.625 Nvme0n1 : 8.00 6819.75 26.64 0.00 0.00 0.00 0.00 0.00 00:10:16.625 [2024-12-05T19:30:10.062Z] =================================================================================================================== 00:10:16.625 [2024-12-05T19:30:10.062Z] Total : 6819.75 26.64 0.00 0.00 0.00 0.00 0.00 00:10:16.625 00:10:18.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.004 Nvme0n1 : 9.00 6809.89 26.60 0.00 0.00 0.00 0.00 0.00 00:10:18.004 [2024-12-05T19:30:11.441Z] =================================================================================================================== 00:10:18.004 [2024-12-05T19:30:11.441Z] Total : 6809.89 26.60 0.00 0.00 0.00 0.00 0.00 00:10:18.004 00:10:18.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.936 Nvme0n1 : 10.00 6789.30 26.52 0.00 0.00 0.00 0.00 0.00 00:10:18.936 [2024-12-05T19:30:12.373Z] =================================================================================================================== 00:10:18.936 [2024-12-05T19:30:12.373Z] Total : 6789.30 26.52 0.00 0.00 0.00 0.00 0.00 00:10:18.936 00:10:18.936 00:10:18.936 Latency(us) 00:10:18.936 [2024-12-05T19:30:12.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:18.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.936 Nvme0n1 : 10.02 6788.03 26.52 0.00 0.00 18851.37 5481.19 121539.49 00:10:18.936 [2024-12-05T19:30:12.373Z] =================================================================================================================== 00:10:18.936 [2024-12-05T19:30:12.373Z] Total : 6788.03 26.52 0.00 0.00 18851.37 5481.19 121539.49 00:10:18.936 { 00:10:18.936 "results": [ 00:10:18.936 { 00:10:18.936 "job": "Nvme0n1", 00:10:18.936 "core_mask": "0x2", 00:10:18.936 "workload": "randwrite", 00:10:18.936 "status": "finished", 00:10:18.936 "queue_depth": 128, 00:10:18.936 "io_size": 4096, 00:10:18.936 "runtime": 
10.020721, 00:10:18.936 "iops": 6788.034513684195, 00:10:18.936 "mibps": 26.515759819078887, 00:10:18.936 "io_failed": 0, 00:10:18.936 "io_timeout": 0, 00:10:18.936 "avg_latency_us": 18851.372184686283, 00:10:18.936 "min_latency_us": 5481.192727272727, 00:10:18.936 "max_latency_us": 121539.4909090909 00:10:18.936 } 00:10:18.936 ], 00:10:18.936 "core_count": 1 00:10:18.936 } 00:10:18.936 19:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63252 00:10:18.936 19:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63252 ']' 00:10:18.936 19:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63252 00:10:18.936 19:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:18.936 19:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.936 19:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63252 00:10:18.936 killing process with pid 63252 00:10:18.936 Received shutdown signal, test time was about 10.000000 seconds 00:10:18.936 00:10:18.936 Latency(us) 00:10:18.936 [2024-12-05T19:30:12.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:18.936 [2024-12-05T19:30:12.374Z] =================================================================================================================== 00:10:18.937 [2024-12-05T19:30:12.374Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:18.937 19:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:18.937 19:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:18.937 19:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63252' 00:10:18.937 19:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63252 00:10:18.937 19:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63252 00:10:18.937 19:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:19.195 19:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:19.454 19:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4c7f81c-5f30-4ef6-ab2e-f9ec1f4a49b0 00:10:19.454 19:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:20.021 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:20.021 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:20.021 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:20.021 [2024-12-05 19:30:13.427718] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:20.280 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4c7f81c-5f30-4ef6-ab2e-f9ec1f4a49b0 00:10:20.280 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:20.280 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4c7f81c-5f30-4ef6-ab2e-f9ec1f4a49b0 00:10:20.280 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:20.280 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:20.280 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:20.280 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:20.280 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:20.280 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:20.280 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:20.280 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:20.280 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4c7f81c-5f30-4ef6-ab2e-f9ec1f4a49b0 00:10:20.538 request: 00:10:20.538 { 00:10:20.538 "uuid": "a4c7f81c-5f30-4ef6-ab2e-f9ec1f4a49b0", 00:10:20.538 "method": "bdev_lvol_get_lvstores", 00:10:20.538 "req_id": 1 00:10:20.538 } 00:10:20.538 Got JSON-RPC error response 00:10:20.538 response: 00:10:20.538 { 00:10:20.538 "code": -19, 00:10:20.538 "message": "No such device" 00:10:20.538 } 00:10:20.538 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:20.538 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:20.538 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:20.538 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:20.538 19:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:20.796 aio_bdev 00:10:20.796 19:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
ebe084de-e048-4019-8867-a65b4170fb55 00:10:20.796 19:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=ebe084de-e048-4019-8867-a65b4170fb55 00:10:20.796 19:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.796 19:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:20.796 19:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.796 19:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.796 19:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:21.054 19:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ebe084de-e048-4019-8867-a65b4170fb55 -t 2000 00:10:21.313 [ 00:10:21.313 { 00:10:21.313 "name": "ebe084de-e048-4019-8867-a65b4170fb55", 00:10:21.313 "aliases": [ 00:10:21.313 "lvs/lvol" 00:10:21.313 ], 00:10:21.313 "product_name": "Logical Volume", 00:10:21.313 "block_size": 4096, 00:10:21.313 "num_blocks": 38912, 00:10:21.313 "uuid": "ebe084de-e048-4019-8867-a65b4170fb55", 00:10:21.313 "assigned_rate_limits": { 00:10:21.313 "rw_ios_per_sec": 0, 00:10:21.313 "rw_mbytes_per_sec": 0, 00:10:21.313 "r_mbytes_per_sec": 0, 00:10:21.313 "w_mbytes_per_sec": 0 00:10:21.313 }, 00:10:21.313 "claimed": false, 00:10:21.313 "zoned": false, 00:10:21.313 "supported_io_types": { 00:10:21.313 "read": true, 00:10:21.313 "write": true, 00:10:21.313 "unmap": true, 00:10:21.313 "flush": false, 00:10:21.313 "reset": true, 00:10:21.313 "nvme_admin": false, 00:10:21.313 "nvme_io": false, 00:10:21.313 "nvme_io_md": false, 00:10:21.313 "write_zeroes": true, 00:10:21.313 "zcopy": false, 00:10:21.313 "get_zone_info": false, 00:10:21.313 "zone_management": false, 00:10:21.313 "zone_append": false, 00:10:21.313 "compare": false, 00:10:21.313 "compare_and_write": false, 00:10:21.313 "abort": false, 00:10:21.313 "seek_hole": true, 00:10:21.313 "seek_data": true, 00:10:21.313 "copy": false, 00:10:21.313 "nvme_iov_md": false 00:10:21.313 }, 00:10:21.313 "driver_specific": { 00:10:21.313 "lvol": { 00:10:21.313 "lvol_store_uuid": "a4c7f81c-5f30-4ef6-ab2e-f9ec1f4a49b0", 00:10:21.313 "base_bdev": "aio_bdev", 00:10:21.313 "thin_provision": false, 00:10:21.313 "num_allocated_clusters": 38, 00:10:21.313 "snapshot": false, 00:10:21.313 "clone": false, 00:10:21.313 "esnap_clone": false 00:10:21.313 } 00:10:21.313 } 00:10:21.313 } 00:10:21.313 ] 00:10:21.313 19:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:21.313 19:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4c7f81c-5f30-4ef6-ab2e-f9ec1f4a49b0 00:10:21.313 19:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:21.572 19:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:21.572 19:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4c7f81c-5f30-4ef6-ab2e-f9ec1f4a49b0 00:10:21.572 19:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:21.835 19:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:21.835 19:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ebe084de-e048-4019-8867-a65b4170fb55 00:10:22.099 19:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a4c7f81c-5f30-4ef6-ab2e-f9ec1f4a49b0 00:10:22.356 19:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:22.619 19:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:23.184 ************************************ 00:10:23.184 END TEST lvs_grow_clean 00:10:23.184 ************************************ 00:10:23.184 00:10:23.184 real 0m18.288s 00:10:23.184 user 0m17.016s 00:10:23.184 sys 0m2.720s 00:10:23.184 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.184 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:23.184 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:23.184 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:23.184 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.184 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:23.184 ************************************ 00:10:23.184 START TEST lvs_grow_dirty 00:10:23.184 ************************************ 00:10:23.184 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:23.184 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:23.184 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:23.184 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:23.184 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:23.184 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:23.184 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:23.184 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:23.184 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:23.184 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:23.442 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:23.442 19:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:23.700 19:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=341b1ab5-802d-4856-8783-0893a60f587f 00:10:23.700 19:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 341b1ab5-802d-4856-8783-0893a60f587f 00:10:23.700 19:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:23.960 19:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:23.960 19:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:23.960 19:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 341b1ab5-802d-4856-8783-0893a60f587f lvol 150 00:10:24.235 19:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=98df53cf-d443-4f93-a13f-2c3fc60011fe 00:10:24.235 19:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:24.235 19:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:24.495 [2024-12-05 19:30:17.858054] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:24.495 [2024-12-05 19:30:17.858197] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:24.495 true 00:10:24.495 19:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:24.495 19:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 341b1ab5-802d-4856-8783-0893a60f587f 00:10:25.060 19:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:25.060 19:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:25.060 19:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 98df53cf-d443-4f93-a13f-2c3fc60011fe 00:10:25.626 19:30:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:25.626 [2024-12-05 19:30:19.010770] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:25.626 19:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:26.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:26.192 19:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63526 00:10:26.192 19:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:26.192 19:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:26.192 19:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63526 /var/tmp/bdevperf.sock 00:10:26.192 19:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63526 ']' 00:10:26.192 19:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:26.192 19:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.192 19:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:26.192 19:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.192 19:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:26.192 [2024-12-05 19:30:19.382695] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:10:26.192 [2024-12-05 19:30:19.383530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63526 ] 00:10:26.192 [2024-12-05 19:30:19.530938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.192 [2024-12-05 19:30:19.594029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.449 [2024-12-05 19:30:19.656177] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:26.449 19:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.449 19:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:26.449 19:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:26.705 Nvme0n1 00:10:26.705 19:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:26.962 [ 00:10:26.962 { 00:10:26.962 "name": "Nvme0n1", 00:10:26.962 "aliases": [ 00:10:26.962 "98df53cf-d443-4f93-a13f-2c3fc60011fe" 00:10:26.962 ], 00:10:26.962 "product_name": "NVMe disk", 00:10:26.962 "block_size": 4096, 00:10:26.962 "num_blocks": 38912, 00:10:26.962 "uuid": "98df53cf-d443-4f93-a13f-2c3fc60011fe", 00:10:26.962 "numa_id": -1, 00:10:26.962 "assigned_rate_limits": { 00:10:26.962 "rw_ios_per_sec": 0, 00:10:26.962 "rw_mbytes_per_sec": 0, 00:10:26.962 "r_mbytes_per_sec": 0, 00:10:26.962 "w_mbytes_per_sec": 0 00:10:26.962 }, 00:10:26.962 "claimed": false, 00:10:26.962 "zoned": false, 00:10:26.962 "supported_io_types": { 00:10:26.962 "read": true, 00:10:26.962 "write": true, 00:10:26.962 "unmap": true, 00:10:26.962 "flush": true, 00:10:26.962 "reset": true, 00:10:26.962 "nvme_admin": true, 00:10:26.962 "nvme_io": true, 00:10:26.962 "nvme_io_md": false, 00:10:26.962 "write_zeroes": true, 00:10:26.962 "zcopy": false, 00:10:26.962 "get_zone_info": false, 00:10:26.962 "zone_management": false, 00:10:26.962 "zone_append": false, 00:10:26.962 "compare": true, 00:10:26.962 "compare_and_write": true, 00:10:26.962 "abort": true, 00:10:26.962 "seek_hole": false, 00:10:26.962 "seek_data": false, 00:10:26.962 "copy": true, 00:10:26.962 "nvme_iov_md": false 00:10:26.962 }, 00:10:26.962 "memory_domains": [ 00:10:26.962 { 00:10:26.962 "dma_device_id": "system", 00:10:26.962 "dma_device_type": 1 00:10:26.962 } 00:10:26.962 ], 00:10:26.962 "driver_specific": { 00:10:26.962 "nvme": [ 00:10:26.962 { 00:10:26.962 "trid": { 00:10:26.962 "trtype": "TCP", 00:10:26.962 "adrfam": "IPv4", 00:10:26.962 "traddr": "10.0.0.3", 00:10:26.962 "trsvcid": "4420", 00:10:26.962 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:26.962 }, 00:10:26.962 "ctrlr_data": { 00:10:26.962 "cntlid": 1, 00:10:26.962 "vendor_id": "0x8086", 00:10:26.962 "model_number": "SPDK bdev Controller", 00:10:26.962 "serial_number": "SPDK0", 00:10:26.962 "firmware_revision": "25.01", 00:10:26.962 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:26.962 "oacs": { 00:10:26.962 "security": 0, 00:10:26.962 "format": 0, 00:10:26.962 "firmware": 0, 
00:10:26.962 "ns_manage": 0 00:10:26.962 }, 00:10:26.962 "multi_ctrlr": true, 00:10:26.962 "ana_reporting": false 00:10:26.962 }, 00:10:26.962 "vs": { 00:10:26.962 "nvme_version": "1.3" 00:10:26.962 }, 00:10:26.962 "ns_data": { 00:10:26.962 "id": 1, 00:10:26.962 "can_share": true 00:10:26.962 } 00:10:26.962 } 00:10:26.962 ], 00:10:26.962 "mp_policy": "active_passive" 00:10:26.962 } 00:10:26.962 } 00:10:26.962 ] 00:10:26.962 19:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:26.962 19:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63542 00:10:26.962 19:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:27.220 Running I/O for 10 seconds... 00:10:28.154 Latency(us) 00:10:28.154 [2024-12-05T19:30:21.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:28.154 Nvme0n1 : 1.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:10:28.154 [2024-12-05T19:30:21.591Z] =================================================================================================================== 00:10:28.154 [2024-12-05T19:30:21.591Z] Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:10:28.154 00:10:29.089 19:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 341b1ab5-802d-4856-8783-0893a60f587f 00:10:29.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:29.089 Nvme0n1 : 2.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:10:29.089 [2024-12-05T19:30:22.526Z] =================================================================================================================== 00:10:29.089 [2024-12-05T19:30:22.526Z] Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:10:29.089 00:10:29.348 true 00:10:29.348 19:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 341b1ab5-802d-4856-8783-0893a60f587f 00:10:29.348 19:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:29.606 19:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:29.607 19:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:29.607 19:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63542 00:10:30.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.175 Nvme0n1 : 3.00 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:10:30.175 [2024-12-05T19:30:23.612Z] =================================================================================================================== 00:10:30.175 [2024-12-05T19:30:23.612Z] Total : 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:10:30.175 00:10:31.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.112 Nvme0n1 : 4.00 7207.25 28.15 0.00 0.00 0.00 0.00 0.00 00:10:31.112 [2024-12-05T19:30:24.549Z] 
=================================================================================================================== 00:10:31.112 [2024-12-05T19:30:24.549Z] Total : 7207.25 28.15 0.00 0.00 0.00 0.00 0.00 00:10:31.112 00:10:32.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.050 Nvme0n1 : 5.00 7213.60 28.18 0.00 0.00 0.00 0.00 0.00 00:10:32.050 [2024-12-05T19:30:25.487Z] =================================================================================================================== 00:10:32.050 [2024-12-05T19:30:25.487Z] Total : 7213.60 28.18 0.00 0.00 0.00 0.00 0.00 00:10:32.050 00:10:33.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.454 Nvme0n1 : 6.00 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:10:33.454 [2024-12-05T19:30:26.891Z] =================================================================================================================== 00:10:33.454 [2024-12-05T19:30:26.891Z] Total : 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:10:33.454 00:10:34.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.388 Nvme0n1 : 7.00 7106.71 27.76 0.00 0.00 0.00 0.00 0.00 00:10:34.388 [2024-12-05T19:30:27.825Z] =================================================================================================================== 00:10:34.388 [2024-12-05T19:30:27.825Z] Total : 7106.71 27.76 0.00 0.00 0.00 0.00 0.00 00:10:34.388 00:10:35.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.322 Nvme0n1 : 8.00 7075.62 27.64 0.00 0.00 0.00 0.00 0.00 00:10:35.322 [2024-12-05T19:30:28.759Z] =================================================================================================================== 00:10:35.322 [2024-12-05T19:30:28.759Z] Total : 7075.62 27.64 0.00 0.00 0.00 0.00 0.00 00:10:35.322 00:10:36.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.254 Nvme0n1 : 9.00 7023.22 27.43 0.00 0.00 0.00 0.00 0.00 00:10:36.254 [2024-12-05T19:30:29.691Z] =================================================================================================================== 00:10:36.254 [2024-12-05T19:30:29.691Z] Total : 7023.22 27.43 0.00 0.00 0.00 0.00 0.00 00:10:36.254 00:10:37.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.191 Nvme0n1 : 10.00 6981.30 27.27 0.00 0.00 0.00 0.00 0.00 00:10:37.191 [2024-12-05T19:30:30.628Z] =================================================================================================================== 00:10:37.191 [2024-12-05T19:30:30.628Z] Total : 6981.30 27.27 0.00 0.00 0.00 0.00 0.00 00:10:37.191 00:10:37.191 00:10:37.191 Latency(us) 00:10:37.191 [2024-12-05T19:30:30.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.191 Nvme0n1 : 10.02 6979.33 27.26 0.00 0.00 18334.45 5749.29 71970.44 00:10:37.191 [2024-12-05T19:30:30.628Z] =================================================================================================================== 00:10:37.191 [2024-12-05T19:30:30.628Z] Total : 6979.33 27.26 0.00 0.00 18334.45 5749.29 71970.44 00:10:37.191 { 00:10:37.191 "results": [ 00:10:37.191 { 00:10:37.191 "job": "Nvme0n1", 00:10:37.191 "core_mask": "0x2", 00:10:37.191 "workload": "randwrite", 00:10:37.191 "status": "finished", 00:10:37.191 "queue_depth": 128, 00:10:37.191 "io_size": 4096, 00:10:37.191 "runtime": 
10.021161, 00:10:37.191 "iops": 6979.331037591353, 00:10:37.191 "mibps": 27.263011865591224, 00:10:37.191 "io_failed": 0, 00:10:37.191 "io_timeout": 0, 00:10:37.191 "avg_latency_us": 18334.450534021533, 00:10:37.191 "min_latency_us": 5749.294545454545, 00:10:37.191 "max_latency_us": 71970.44363636363 00:10:37.191 } 00:10:37.191 ], 00:10:37.191 "core_count": 1 00:10:37.191 } 00:10:37.191 19:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63526 00:10:37.191 19:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63526 ']' 00:10:37.191 19:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63526 00:10:37.191 19:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:37.191 19:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.191 19:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63526 00:10:37.191 killing process with pid 63526 00:10:37.191 Received shutdown signal, test time was about 10.000000 seconds 00:10:37.191 00:10:37.191 Latency(us) 00:10:37.191 [2024-12-05T19:30:30.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.191 [2024-12-05T19:30:30.628Z] =================================================================================================================== 00:10:37.191 [2024-12-05T19:30:30.628Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:37.192 19:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:37.192 19:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:37.192 19:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63526' 00:10:37.192 19:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63526 00:10:37.192 19:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63526 00:10:37.450 19:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:37.709 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:37.968 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 341b1ab5-802d-4856-8783-0893a60f587f 00:10:37.968 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:38.226 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:38.226 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:38.226 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63176 
00:10:38.226 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63176 00:10:38.580 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63176 Killed "${NVMF_APP[@]}" "$@" 00:10:38.580 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:38.580 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:38.580 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.580 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.580 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:38.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.580 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63679 00:10:38.580 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63679 00:10:38.580 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:38.580 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63679 ']' 00:10:38.580 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.580 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.580 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.580 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.580 19:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:38.580 [2024-12-05 19:30:31.754613] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:10:38.580 [2024-12-05 19:30:31.754713] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.580 [2024-12-05 19:30:31.900716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.580 [2024-12-05 19:30:31.958955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.580 [2024-12-05 19:30:31.958999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.580 [2024-12-05 19:30:31.959026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.580 [2024-12-05 19:30:31.959034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.580 [2024-12-05 19:30:31.959041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:38.580 [2024-12-05 19:30:31.959478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.856 [2024-12-05 19:30:32.014825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.423 19:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.423 19:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:39.423 19:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.423 19:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.423 19:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:39.423 19:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.423 19:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:39.683 [2024-12-05 19:30:33.051680] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:39.683 [2024-12-05 19:30:33.052136] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:39.683 [2024-12-05 19:30:33.052401] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:39.683 19:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:39.683 19:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 98df53cf-d443-4f93-a13f-2c3fc60011fe 00:10:39.683 19:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=98df53cf-d443-4f93-a13f-2c3fc60011fe 00:10:39.683 19:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.683 19:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:39.683 19:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.683 19:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.683 19:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:39.950 19:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 98df53cf-d443-4f93-a13f-2c3fc60011fe -t 2000 00:10:40.209 [ 00:10:40.209 { 00:10:40.209 "name": "98df53cf-d443-4f93-a13f-2c3fc60011fe", 00:10:40.209 "aliases": [ 00:10:40.209 "lvs/lvol" 00:10:40.209 ], 00:10:40.209 "product_name": "Logical Volume", 00:10:40.209 "block_size": 4096, 00:10:40.209 "num_blocks": 38912, 00:10:40.209 "uuid": "98df53cf-d443-4f93-a13f-2c3fc60011fe", 00:10:40.209 "assigned_rate_limits": { 00:10:40.209 "rw_ios_per_sec": 0, 00:10:40.209 "rw_mbytes_per_sec": 0, 00:10:40.209 "r_mbytes_per_sec": 0, 00:10:40.209 "w_mbytes_per_sec": 0 00:10:40.209 }, 00:10:40.209 
"claimed": false, 00:10:40.209 "zoned": false, 00:10:40.209 "supported_io_types": { 00:10:40.209 "read": true, 00:10:40.209 "write": true, 00:10:40.209 "unmap": true, 00:10:40.209 "flush": false, 00:10:40.209 "reset": true, 00:10:40.209 "nvme_admin": false, 00:10:40.209 "nvme_io": false, 00:10:40.209 "nvme_io_md": false, 00:10:40.209 "write_zeroes": true, 00:10:40.209 "zcopy": false, 00:10:40.209 "get_zone_info": false, 00:10:40.209 "zone_management": false, 00:10:40.209 "zone_append": false, 00:10:40.209 "compare": false, 00:10:40.209 "compare_and_write": false, 00:10:40.209 "abort": false, 00:10:40.209 "seek_hole": true, 00:10:40.209 "seek_data": true, 00:10:40.209 "copy": false, 00:10:40.209 "nvme_iov_md": false 00:10:40.209 }, 00:10:40.209 "driver_specific": { 00:10:40.209 "lvol": { 00:10:40.209 "lvol_store_uuid": "341b1ab5-802d-4856-8783-0893a60f587f", 00:10:40.209 "base_bdev": "aio_bdev", 00:10:40.209 "thin_provision": false, 00:10:40.209 "num_allocated_clusters": 38, 00:10:40.209 "snapshot": false, 00:10:40.209 "clone": false, 00:10:40.209 "esnap_clone": false 00:10:40.209 } 00:10:40.209 } 00:10:40.209 } 00:10:40.209 ] 00:10:40.209 19:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:40.209 19:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 341b1ab5-802d-4856-8783-0893a60f587f 00:10:40.209 19:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:40.468 19:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:40.468 19:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 341b1ab5-802d-4856-8783-0893a60f587f 00:10:40.468 19:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:41.036 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:41.036 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:41.295 [2024-12-05 19:30:34.477392] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:41.295 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 341b1ab5-802d-4856-8783-0893a60f587f 00:10:41.295 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:41.295 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 341b1ab5-802d-4856-8783-0893a60f587f 00:10:41.295 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.295 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.295 19:30:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.295 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.295 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.295 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.295 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.295 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:41.295 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 341b1ab5-802d-4856-8783-0893a60f587f 00:10:41.554 request: 00:10:41.554 { 00:10:41.554 "uuid": "341b1ab5-802d-4856-8783-0893a60f587f", 00:10:41.554 "method": "bdev_lvol_get_lvstores", 00:10:41.554 "req_id": 1 00:10:41.554 } 00:10:41.554 Got JSON-RPC error response 00:10:41.554 response: 00:10:41.554 { 00:10:41.554 "code": -19, 00:10:41.554 "message": "No such device" 00:10:41.554 } 00:10:41.554 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:41.554 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:41.554 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:41.554 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:41.554 19:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:41.814 aio_bdev 00:10:41.814 19:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 98df53cf-d443-4f93-a13f-2c3fc60011fe 00:10:41.814 19:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=98df53cf-d443-4f93-a13f-2c3fc60011fe 00:10:41.814 19:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.814 19:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:41.814 19:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.814 19:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.814 19:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:42.073 19:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 98df53cf-d443-4f93-a13f-2c3fc60011fe -t 2000 00:10:42.332 [ 00:10:42.332 { 
00:10:42.332 "name": "98df53cf-d443-4f93-a13f-2c3fc60011fe", 00:10:42.332 "aliases": [ 00:10:42.332 "lvs/lvol" 00:10:42.332 ], 00:10:42.332 "product_name": "Logical Volume", 00:10:42.332 "block_size": 4096, 00:10:42.332 "num_blocks": 38912, 00:10:42.332 "uuid": "98df53cf-d443-4f93-a13f-2c3fc60011fe", 00:10:42.332 "assigned_rate_limits": { 00:10:42.332 "rw_ios_per_sec": 0, 00:10:42.332 "rw_mbytes_per_sec": 0, 00:10:42.332 "r_mbytes_per_sec": 0, 00:10:42.332 "w_mbytes_per_sec": 0 00:10:42.332 }, 00:10:42.332 "claimed": false, 00:10:42.332 "zoned": false, 00:10:42.332 "supported_io_types": { 00:10:42.332 "read": true, 00:10:42.332 "write": true, 00:10:42.332 "unmap": true, 00:10:42.332 "flush": false, 00:10:42.332 "reset": true, 00:10:42.332 "nvme_admin": false, 00:10:42.332 "nvme_io": false, 00:10:42.332 "nvme_io_md": false, 00:10:42.332 "write_zeroes": true, 00:10:42.332 "zcopy": false, 00:10:42.332 "get_zone_info": false, 00:10:42.332 "zone_management": false, 00:10:42.332 "zone_append": false, 00:10:42.332 "compare": false, 00:10:42.332 "compare_and_write": false, 00:10:42.332 "abort": false, 00:10:42.332 "seek_hole": true, 00:10:42.332 "seek_data": true, 00:10:42.332 "copy": false, 00:10:42.332 "nvme_iov_md": false 00:10:42.332 }, 00:10:42.332 "driver_specific": { 00:10:42.332 "lvol": { 00:10:42.332 "lvol_store_uuid": "341b1ab5-802d-4856-8783-0893a60f587f", 00:10:42.332 "base_bdev": "aio_bdev", 00:10:42.332 "thin_provision": false, 00:10:42.332 "num_allocated_clusters": 38, 00:10:42.332 "snapshot": false, 00:10:42.332 "clone": false, 00:10:42.332 "esnap_clone": false 00:10:42.332 } 00:10:42.332 } 00:10:42.332 } 00:10:42.332 ] 00:10:42.333 19:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:42.333 19:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:42.333 19:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 341b1ab5-802d-4856-8783-0893a60f587f 00:10:42.592 19:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:42.592 19:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 341b1ab5-802d-4856-8783-0893a60f587f 00:10:42.592 19:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:42.852 19:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:42.852 19:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 98df53cf-d443-4f93-a13f-2c3fc60011fe 00:10:43.111 19:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 341b1ab5-802d-4856-8783-0893a60f587f 00:10:43.371 19:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:43.648 19:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:43.939 00:10:43.939 real 0m20.863s 00:10:43.939 user 0m42.108s 00:10:43.939 sys 0m8.590s 00:10:43.939 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.939 ************************************ 00:10:43.939 END TEST lvs_grow_dirty 00:10:43.939 ************************************ 00:10:43.939 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:43.939 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:43.939 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:43.939 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:43.939 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:43.939 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:43.939 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:43.939 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:43.939 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:43.939 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:43.939 nvmf_trace.0 00:10:43.939 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:43.939 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:43.939 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:43.939 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:44.197 rmmod nvme_tcp 00:10:44.197 rmmod nvme_fabrics 00:10:44.197 rmmod nvme_keyring 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63679 ']' 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63679 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63679 ']' 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63679 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:44.197 19:30:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63679 00:10:44.197 killing process with pid 63679 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63679' 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63679 00:10:44.197 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63679 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:44.456 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:44.715 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:44.715 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:44.715 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:44.715 19:30:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:44.715 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:10:44.715 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.715 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.715 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.715 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:10:44.715 ************************************ 00:10:44.715 END TEST nvmf_lvs_grow 00:10:44.715 ************************************ 00:10:44.715 00:10:44.715 real 0m41.393s 00:10:44.715 user 1m5.763s 00:10:44.715 sys 0m12.133s 00:10:44.715 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.715 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:44.715 19:30:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:44.715 19:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.715 19:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.715 19:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.715 ************************************ 00:10:44.715 START TEST nvmf_bdev_io_wait 00:10:44.715 ************************************ 00:10:44.715 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:44.975 * Looking for test storage... 
00:10:44.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:44.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.975 --rc genhtml_branch_coverage=1 00:10:44.975 --rc genhtml_function_coverage=1 00:10:44.975 --rc genhtml_legend=1 00:10:44.975 --rc geninfo_all_blocks=1 00:10:44.975 --rc geninfo_unexecuted_blocks=1 00:10:44.975 00:10:44.975 ' 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:44.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.975 --rc genhtml_branch_coverage=1 00:10:44.975 --rc genhtml_function_coverage=1 00:10:44.975 --rc genhtml_legend=1 00:10:44.975 --rc geninfo_all_blocks=1 00:10:44.975 --rc geninfo_unexecuted_blocks=1 00:10:44.975 00:10:44.975 ' 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:44.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.975 --rc genhtml_branch_coverage=1 00:10:44.975 --rc genhtml_function_coverage=1 00:10:44.975 --rc genhtml_legend=1 00:10:44.975 --rc geninfo_all_blocks=1 00:10:44.975 --rc geninfo_unexecuted_blocks=1 00:10:44.975 00:10:44.975 ' 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:44.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.975 --rc genhtml_branch_coverage=1 00:10:44.975 --rc genhtml_function_coverage=1 00:10:44.975 --rc genhtml_legend=1 00:10:44.975 --rc geninfo_all_blocks=1 00:10:44.975 --rc geninfo_unexecuted_blocks=1 00:10:44.975 00:10:44.975 ' 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.975 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.976 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:44.976 
19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:44.976 Cannot find device "nvmf_init_br" 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:44.976 Cannot find device "nvmf_init_br2" 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:44.976 Cannot find device "nvmf_tgt_br" 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:44.976 Cannot find device "nvmf_tgt_br2" 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:44.976 Cannot find device "nvmf_init_br" 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:44.976 Cannot find device "nvmf_init_br2" 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:44.976 Cannot find device "nvmf_tgt_br" 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:44.976 Cannot find device "nvmf_tgt_br2" 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:10:44.976 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:45.234 Cannot find device "nvmf_br" 00:10:45.234 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:10:45.234 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:45.234 Cannot find device "nvmf_init_if" 00:10:45.234 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:10:45.234 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:45.234 Cannot find device "nvmf_init_if2" 00:10:45.234 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:45.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:10:45.235 
19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:45.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:45.235 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:45.235 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:10:45.235 00:10:45.235 --- 10.0.0.3 ping statistics --- 00:10:45.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.235 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:45.235 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:45.235 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:10:45.235 00:10:45.235 --- 10.0.0.4 ping statistics --- 00:10:45.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.235 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:45.235 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:45.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:45.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:45.493 00:10:45.493 --- 10.0.0.1 ping statistics --- 00:10:45.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.493 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:45.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:45.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:10:45.493 00:10:45.493 --- 10.0.0.2 ping statistics --- 00:10:45.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.493 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64050 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64050 00:10:45.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64050 ']' 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.493 19:30:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:45.493 [2024-12-05 19:30:38.770223] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:10:45.493 [2024-12-05 19:30:38.770613] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.493 [2024-12-05 19:30:38.924173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.752 [2024-12-05 19:30:38.987452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.752 [2024-12-05 19:30:38.987742] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.752 [2024-12-05 19:30:38.987931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.752 [2024-12-05 19:30:38.988079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.752 [2024-12-05 19:30:38.988156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.752 [2024-12-05 19:30:38.989597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.752 [2024-12-05 19:30:38.989758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.752 [2024-12-05 19:30:38.989835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.752 [2024-12-05 19:30:38.989834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.689 [2024-12-05 19:30:39.915348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.689 [2024-12-05 19:30:39.928200] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.689 Malloc0 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.689 [2024-12-05 19:30:39.985966] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64085 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64087 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:46.689 19:30:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:46.689 { 00:10:46.689 "params": { 00:10:46.689 "name": "Nvme$subsystem", 00:10:46.689 "trtype": "$TEST_TRANSPORT", 00:10:46.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.689 "adrfam": "ipv4", 00:10:46.689 "trsvcid": "$NVMF_PORT", 00:10:46.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.689 "hdgst": ${hdgst:-false}, 00:10:46.689 "ddgst": ${ddgst:-false} 00:10:46.689 }, 00:10:46.689 "method": "bdev_nvme_attach_controller" 00:10:46.689 } 00:10:46.689 EOF 00:10:46.689 )") 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64089 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:46.689 { 00:10:46.689 "params": { 00:10:46.689 "name": "Nvme$subsystem", 00:10:46.689 "trtype": "$TEST_TRANSPORT", 00:10:46.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.689 "adrfam": "ipv4", 00:10:46.689 "trsvcid": "$NVMF_PORT", 00:10:46.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.689 "hdgst": ${hdgst:-false}, 00:10:46.689 "ddgst": ${ddgst:-false} 00:10:46.689 }, 00:10:46.689 "method": "bdev_nvme_attach_controller" 00:10:46.689 } 00:10:46.689 EOF 00:10:46.689 )") 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:46.689 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:46.690 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:46.690 { 00:10:46.690 "params": { 00:10:46.690 "name": "Nvme$subsystem", 00:10:46.690 "trtype": "$TEST_TRANSPORT", 00:10:46.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.690 "adrfam": "ipv4", 00:10:46.690 "trsvcid": 
"$NVMF_PORT", 00:10:46.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.690 "hdgst": ${hdgst:-false}, 00:10:46.690 "ddgst": ${ddgst:-false} 00:10:46.690 }, 00:10:46.690 "method": "bdev_nvme_attach_controller" 00:10:46.690 } 00:10:46.690 EOF 00:10:46.690 )") 00:10:46.690 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:46.690 "params": { 00:10:46.690 "name": "Nvme1", 00:10:46.690 "trtype": "tcp", 00:10:46.690 "traddr": "10.0.0.3", 00:10:46.690 "adrfam": "ipv4", 00:10:46.690 "trsvcid": "4420", 00:10:46.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.690 "hdgst": false, 00:10:46.690 "ddgst": false 00:10:46.690 }, 00:10:46.690 "method": "bdev_nvme_attach_controller" 00:10:46.690 }' 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:46.690 { 00:10:46.690 "params": { 00:10:46.690 "name": "Nvme$subsystem", 00:10:46.690 "trtype": "$TEST_TRANSPORT", 00:10:46.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.690 "adrfam": "ipv4", 00:10:46.690 "trsvcid": "$NVMF_PORT", 00:10:46.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.690 "hdgst": ${hdgst:-false}, 00:10:46.690 "ddgst": ${ddgst:-false} 00:10:46.690 }, 00:10:46.690 "method": "bdev_nvme_attach_controller" 00:10:46.690 } 00:10:46.690 EOF 00:10:46.690 )") 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:46.690 19:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64092 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:46.690 "params": { 00:10:46.690 "name": "Nvme1", 00:10:46.690 "trtype": "tcp", 00:10:46.690 "traddr": "10.0.0.3", 00:10:46.690 "adrfam": "ipv4", 00:10:46.690 "trsvcid": "4420", 00:10:46.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.690 "hdgst": false, 00:10:46.690 "ddgst": false 00:10:46.690 }, 00:10:46.690 "method": "bdev_nvme_attach_controller" 00:10:46.690 }' 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:46.690 "params": { 00:10:46.690 "name": "Nvme1", 00:10:46.690 "trtype": "tcp", 00:10:46.690 "traddr": "10.0.0.3", 00:10:46.690 "adrfam": "ipv4", 00:10:46.690 "trsvcid": "4420", 00:10:46.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.690 "hdgst": false, 00:10:46.690 "ddgst": false 00:10:46.690 }, 00:10:46.690 "method": "bdev_nvme_attach_controller" 00:10:46.690 }' 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:46.690 "params": { 00:10:46.690 "name": "Nvme1", 00:10:46.690 "trtype": "tcp", 00:10:46.690 "traddr": "10.0.0.3", 00:10:46.690 "adrfam": "ipv4", 00:10:46.690 "trsvcid": "4420", 00:10:46.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.690 "hdgst": false, 00:10:46.690 "ddgst": false 00:10:46.690 }, 00:10:46.690 "method": "bdev_nvme_attach_controller" 00:10:46.690 }' 00:10:46.690 [2024-12-05 19:30:40.055378] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:10:46.690 [2024-12-05 19:30:40.055389] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:10:46.690 [2024-12-05 19:30:40.055497] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:46.690 [2024-12-05 19:30:40.055830] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:46.690 19:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64085 00:10:46.690 [2024-12-05 19:30:40.077899] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:10:46.690 [2024-12-05 19:30:40.078613] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:46.690 [2024-12-05 19:30:40.086659] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:10:46.690 [2024-12-05 19:30:40.086931] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:46.948 [2024-12-05 19:30:40.286814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.948 [2024-12-05 19:30:40.350741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:46.949 [2024-12-05 19:30:40.358586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.949 [2024-12-05 19:30:40.364692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.207 [2024-12-05 19:30:40.414561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:47.207 [2024-12-05 19:30:40.428980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.207 [2024-12-05 19:30:40.438643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.207 [2024-12-05 19:30:40.492815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:47.207 [2024-12-05 19:30:40.506653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.207 Running I/O for 1 seconds... 00:10:47.207 [2024-12-05 19:30:40.512497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.207 Running I/O for 1 seconds... 00:10:47.207 [2024-12-05 19:30:40.565957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:47.207 [2024-12-05 19:30:40.579947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.207 Running I/O for 1 seconds... 00:10:47.464 Running I/O for 1 seconds... 
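
Pieced together from the script line numbers echoed above, the four bdevperf jobs whose results follow run concurrently against the same cnode1 namespace and are reaped in order. A hedged reconstruction of the shape of target/bdev_io_wait.sh (the backgrounding and the WRITE_PID/READ_PID names are inferred; only FLUSH_PID=64089 and UNMAP_PID=64092 appear in this excerpt, and bdevperf here stands for the full build/examples/bdevperf path shown above):

# Four concurrent single-core bdevperf instances, one per workload, each fed its
# own generated NVMe-oF attach config; then a sync and an ordered reap.
bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
sync
wait "$WRITE_PID"; wait "$READ_PID"; wait "$FLUSH_PID"; wait "$UNMAP_PID"
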
00:10:48.409 6438.00 IOPS, 25.15 MiB/s 00:10:48.409 Latency(us) 00:10:48.409 [2024-12-05T19:30:41.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.409 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:48.409 Nvme1n1 : 1.02 6439.80 25.16 0.00 0.00 19645.48 4706.68 37176.79 00:10:48.409 [2024-12-05T19:30:41.846Z] =================================================================================================================== 00:10:48.409 [2024-12-05T19:30:41.846Z] Total : 6439.80 25.16 0.00 0.00 19645.48 4706.68 37176.79 00:10:48.409 9669.00 IOPS, 37.77 MiB/s 00:10:48.409 Latency(us) 00:10:48.409 [2024-12-05T19:30:41.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.409 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:48.409 Nvme1n1 : 1.01 9722.20 37.98 0.00 0.00 13105.38 7923.90 23950.43 00:10:48.409 [2024-12-05T19:30:41.846Z] =================================================================================================================== 00:10:48.409 [2024-12-05T19:30:41.846Z] Total : 9722.20 37.98 0.00 0.00 13105.38 7923.90 23950.43 00:10:48.409 155496.00 IOPS, 607.41 MiB/s 00:10:48.409 Latency(us) 00:10:48.409 [2024-12-05T19:30:41.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.409 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:48.409 Nvme1n1 : 1.00 155184.63 606.19 0.00 0.00 820.35 374.23 1966.08 00:10:48.409 [2024-12-05T19:30:41.846Z] =================================================================================================================== 00:10:48.409 [2024-12-05T19:30:41.846Z] Total : 155184.63 606.19 0.00 0.00 820.35 374.23 1966.08 00:10:48.409 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64087 00:10:48.409 6434.00 IOPS, 25.13 MiB/s 00:10:48.409 Latency(us) 00:10:48.409 [2024-12-05T19:30:41.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.409 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:48.409 Nvme1n1 : 1.01 6516.51 25.46 0.00 0.00 19563.85 6225.92 41943.04 00:10:48.409 [2024-12-05T19:30:41.846Z] =================================================================================================================== 00:10:48.410 [2024-12-05T19:30:41.847Z] Total : 6516.51 25.46 0.00 0.00 19563.85 6225.92 41943.04 00:10:48.694 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64089 00:10:48.694 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64092 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:48.695 rmmod nvme_tcp 00:10:48.695 rmmod nvme_fabrics 00:10:48.695 rmmod nvme_keyring 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64050 ']' 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64050 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64050 ']' 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64050 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.695 19:30:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64050 00:10:48.695 killing process with pid 64050 00:10:48.695 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.695 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.695 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64050' 00:10:48.695 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64050 00:10:48.695 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64050 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:48.953 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:49.212 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:49.212 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:49.212 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.212 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.212 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.212 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:10:49.212 00:10:49.212 real 0m4.350s 00:10:49.212 user 0m17.516s 00:10:49.212 sys 0m2.360s 00:10:49.212 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.212 ************************************ 00:10:49.212 END TEST nvmf_bdev_io_wait 00:10:49.212 ************************************ 00:10:49.212 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:49.212 19:30:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:49.212 19:30:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:49.212 19:30:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.212 19:30:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:49.212 ************************************ 00:10:49.212 START TEST nvmf_queue_depth 00:10:49.212 ************************************ 00:10:49.212 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:49.212 * Looking for test storage... 
00:10:49.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:49.212 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:49.212 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:10:49.212 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:49.472 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:49.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.473 --rc genhtml_branch_coverage=1 00:10:49.473 --rc genhtml_function_coverage=1 00:10:49.473 --rc genhtml_legend=1 00:10:49.473 --rc geninfo_all_blocks=1 00:10:49.473 --rc geninfo_unexecuted_blocks=1 00:10:49.473 00:10:49.473 ' 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:49.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.473 --rc genhtml_branch_coverage=1 00:10:49.473 --rc genhtml_function_coverage=1 00:10:49.473 --rc genhtml_legend=1 00:10:49.473 --rc geninfo_all_blocks=1 00:10:49.473 --rc geninfo_unexecuted_blocks=1 00:10:49.473 00:10:49.473 ' 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:49.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.473 --rc genhtml_branch_coverage=1 00:10:49.473 --rc genhtml_function_coverage=1 00:10:49.473 --rc genhtml_legend=1 00:10:49.473 --rc geninfo_all_blocks=1 00:10:49.473 --rc geninfo_unexecuted_blocks=1 00:10:49.473 00:10:49.473 ' 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:49.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.473 --rc genhtml_branch_coverage=1 00:10:49.473 --rc genhtml_function_coverage=1 00:10:49.473 --rc genhtml_legend=1 00:10:49.473 --rc geninfo_all_blocks=1 00:10:49.473 --rc geninfo_unexecuted_blocks=1 00:10:49.473 00:10:49.473 ' 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:49.473 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:49.473 
19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.473 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:49.474 19:30:42 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:49.474 Cannot find device "nvmf_init_br" 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:49.474 Cannot find device "nvmf_init_br2" 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:49.474 Cannot find device "nvmf_tgt_br" 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:49.474 Cannot find device "nvmf_tgt_br2" 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:49.474 Cannot find device "nvmf_init_br" 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:49.474 Cannot find device "nvmf_init_br2" 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:49.474 Cannot find device "nvmf_tgt_br" 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:49.474 Cannot find device "nvmf_tgt_br2" 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:49.474 Cannot find device "nvmf_br" 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:49.474 Cannot find device "nvmf_init_if" 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:49.474 Cannot find device "nvmf_init_if2" 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:49.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:49.474 19:30:42 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:49.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:49.474 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:49.733 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:49.733 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:49.733 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:49.733 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:49.733 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:49.733 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:49.733 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:49.733 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:49.733 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:49.733 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:49.733 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:49.733 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:49.733 19:30:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:49.733 
19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:49.733 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:49.733 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:10:49.733 00:10:49.733 --- 10.0.0.3 ping statistics --- 00:10:49.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.733 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:49.733 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:49.733 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:10:49.733 00:10:49.733 --- 10.0.0.4 ping statistics --- 00:10:49.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.733 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:49.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:49.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:49.733 00:10:49.733 --- 10.0.0.1 ping statistics --- 00:10:49.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.733 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:49.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:49.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:10:49.733 00:10:49.733 --- 10.0.0.2 ping statistics --- 00:10:49.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.733 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:49.733 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:49.734 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:49.734 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:49.734 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:49.734 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64377 00:10:49.734 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64377 00:10:49.734 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:49.734 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64377 ']' 00:10:49.734 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.734 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.734 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.734 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.734 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:49.992 [2024-12-05 19:30:43.205414] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
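
Condensed from the nvmf_veth_init trace above, the topology these pings verify is a set of veth links joined by a bridge, with the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace. Only the first initiator/target pair (10.0.0.1 <-> 10.0.0.3) is sketched here; the second pair (10.0.0.2/10.0.0.4) and the remaining iptables ACCEPT rules follow the same pattern seen in the trace:

# Target namespace plus one initiator/target veth pair, bridged together.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # default namespace (initiator) -> target interface inside nvmf_tgt_ns_spdk
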
00:10:49.992 [2024-12-05 19:30:43.205775] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.992 [2024-12-05 19:30:43.359716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.992 [2024-12-05 19:30:43.416883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.992 [2024-12-05 19:30:43.417134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.992 [2024-12-05 19:30:43.417154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.992 [2024-12-05 19:30:43.417179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.992 [2024-12-05 19:30:43.417186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.992 [2024-12-05 19:30:43.417650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.252 [2024-12-05 19:30:43.472574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.252 [2024-12-05 19:30:43.585460] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.252 Malloc0 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.252 [2024-12-05 19:30:43.633269] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:50.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64407 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64407 /var/tmp/bdevperf.sock 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64407 ']' 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.252 19:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.512 [2024-12-05 19:30:43.698165] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
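
Stripped of the xtrace prefixes, the queue_depth setup above reduces to a handful of RPCs. In the trace rpc_cmd wraps scripts/rpc.py (targeting the default /var/tmp/spdk.sock), and the backgrounding of bdevperf is implied by the later waitforlisten on its pid:

# Target side: tcp transport, a 64 MiB malloc bdev (512 B blocks), and a subsystem
# exposing it on 10.0.0.3:4420.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Host side: bdevperf with -z (do not start until the perform_tests RPC), queue depth 1024,
# 4 KiB verify workload for 10 seconds; the trace waits for /var/tmp/bdevperf.sock via
# waitforlisten before issuing the attach and triggering the run.
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
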
00:10:50.512 [2024-12-05 19:30:43.698570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64407 ] 00:10:50.512 [2024-12-05 19:30:43.852465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.512 [2024-12-05 19:30:43.916895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.770 [2024-12-05 19:30:43.975478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:50.771 19:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.771 19:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:50.771 19:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:50.771 19:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.771 19:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.771 NVMe0n1 00:10:50.771 19:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.771 19:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:51.029 Running I/O for 10 seconds... 00:10:52.902 6726.00 IOPS, 26.27 MiB/s [2024-12-05T19:30:47.376Z] 7224.00 IOPS, 28.22 MiB/s [2024-12-05T19:30:48.314Z] 7528.00 IOPS, 29.41 MiB/s [2024-12-05T19:30:49.693Z] 7936.00 IOPS, 31.00 MiB/s [2024-12-05T19:30:50.261Z] 7951.80 IOPS, 31.06 MiB/s [2024-12-05T19:30:51.637Z] 7928.17 IOPS, 30.97 MiB/s [2024-12-05T19:30:52.618Z] 7935.71 IOPS, 31.00 MiB/s [2024-12-05T19:30:53.555Z] 7958.25 IOPS, 31.09 MiB/s [2024-12-05T19:30:54.490Z] 8009.44 IOPS, 31.29 MiB/s [2024-12-05T19:30:54.490Z] 8138.90 IOPS, 31.79 MiB/s 00:11:01.053 Latency(us) 00:11:01.053 [2024-12-05T19:30:54.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.053 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:01.053 Verification LBA range: start 0x0 length 0x4000 00:11:01.053 NVMe0n1 : 10.06 8186.86 31.98 0.00 0.00 124524.70 14834.97 97708.22 00:11:01.053 [2024-12-05T19:30:54.490Z] =================================================================================================================== 00:11:01.053 [2024-12-05T19:30:54.490Z] Total : 8186.86 31.98 0.00 0.00 124524.70 14834.97 97708.22 00:11:01.053 { 00:11:01.053 "results": [ 00:11:01.053 { 00:11:01.053 "job": "NVMe0n1", 00:11:01.053 "core_mask": "0x1", 00:11:01.053 "workload": "verify", 00:11:01.053 "status": "finished", 00:11:01.053 "verify_range": { 00:11:01.053 "start": 0, 00:11:01.053 "length": 16384 00:11:01.053 }, 00:11:01.053 "queue_depth": 1024, 00:11:01.053 "io_size": 4096, 00:11:01.053 "runtime": 10.064538, 00:11:01.053 "iops": 8186.863619572006, 00:11:01.053 "mibps": 31.97993601395315, 00:11:01.053 "io_failed": 0, 00:11:01.053 "io_timeout": 0, 00:11:01.053 "avg_latency_us": 124524.69642835628, 00:11:01.053 "min_latency_us": 14834.967272727272, 00:11:01.053 "max_latency_us": 97708.21818181819 00:11:01.053 
} 00:11:01.053 ], 00:11:01.053 "core_count": 1 00:11:01.053 } 00:11:01.053 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64407 00:11:01.053 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64407 ']' 00:11:01.053 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64407 00:11:01.053 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:01.053 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.053 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64407 00:11:01.053 killing process with pid 64407 00:11:01.053 Received shutdown signal, test time was about 10.000000 seconds 00:11:01.053 00:11:01.053 Latency(us) 00:11:01.053 [2024-12-05T19:30:54.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.053 [2024-12-05T19:30:54.490Z] =================================================================================================================== 00:11:01.053 [2024-12-05T19:30:54.490Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:01.053 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.053 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.053 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64407' 00:11:01.053 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64407 00:11:01.053 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64407 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:01.311 rmmod nvme_tcp 00:11:01.311 rmmod nvme_fabrics 00:11:01.311 rmmod nvme_keyring 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64377 ']' 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64377 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64377 ']' 00:11:01.311 
19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64377 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64377 00:11:01.311 killing process with pid 64377 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64377' 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64377 00:11:01.311 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64377 00:11:01.570 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.570 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:01.570 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:01.570 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:01.570 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:01.570 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:01.570 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:01.570 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:01.570 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:01.570 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:01.570 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:01.570 19:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:01.570 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:01.829 19:30:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:11:01.829 00:11:01.829 real 0m12.693s 00:11:01.829 user 0m21.626s 00:11:01.829 sys 0m2.144s 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.829 ************************************ 00:11:01.829 END TEST nvmf_queue_depth 00:11:01.829 ************************************ 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:01.829 ************************************ 00:11:01.829 START TEST nvmf_target_multipath 00:11:01.829 ************************************ 00:11:01.829 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:02.088 * Looking for test storage... 
00:11:02.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:02.088 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:02.088 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:02.088 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:02.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.089 --rc genhtml_branch_coverage=1 00:11:02.089 --rc genhtml_function_coverage=1 00:11:02.089 --rc genhtml_legend=1 00:11:02.089 --rc geninfo_all_blocks=1 00:11:02.089 --rc geninfo_unexecuted_blocks=1 00:11:02.089 00:11:02.089 ' 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:02.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.089 --rc genhtml_branch_coverage=1 00:11:02.089 --rc genhtml_function_coverage=1 00:11:02.089 --rc genhtml_legend=1 00:11:02.089 --rc geninfo_all_blocks=1 00:11:02.089 --rc geninfo_unexecuted_blocks=1 00:11:02.089 00:11:02.089 ' 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:02.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.089 --rc genhtml_branch_coverage=1 00:11:02.089 --rc genhtml_function_coverage=1 00:11:02.089 --rc genhtml_legend=1 00:11:02.089 --rc geninfo_all_blocks=1 00:11:02.089 --rc geninfo_unexecuted_blocks=1 00:11:02.089 00:11:02.089 ' 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:02.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.089 --rc genhtml_branch_coverage=1 00:11:02.089 --rc genhtml_function_coverage=1 00:11:02.089 --rc genhtml_legend=1 00:11:02.089 --rc geninfo_all_blocks=1 00:11:02.089 --rc geninfo_unexecuted_blocks=1 00:11:02.089 00:11:02.089 ' 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.089 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.090 
19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.090 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:02.090 19:30:55 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:02.090 Cannot find device "nvmf_init_br" 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:02.090 Cannot find device "nvmf_init_br2" 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:02.090 Cannot find device "nvmf_tgt_br" 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:11:02.090 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:02.348 Cannot find device "nvmf_tgt_br2" 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:02.348 Cannot find device "nvmf_init_br" 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:02.348 Cannot find device "nvmf_init_br2" 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:02.348 Cannot find device "nvmf_tgt_br" 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:02.348 Cannot find device "nvmf_tgt_br2" 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:02.348 Cannot find device "nvmf_br" 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:02.348 Cannot find device "nvmf_init_if" 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:02.348 Cannot find device "nvmf_init_if2" 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:02.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:02.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:02.348 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:02.606 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:02.606 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:11:02.606 00:11:02.606 --- 10.0.0.3 ping statistics --- 00:11:02.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.606 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:02.606 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:02.606 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:11:02.606 00:11:02.606 --- 10.0.0.4 ping statistics --- 00:11:02.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.606 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:02.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:02.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:02.606 00:11:02.606 --- 10.0.0.1 ping statistics --- 00:11:02.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.606 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:02.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:11:02.606 00:11:02.606 --- 10.0.0.2 ping statistics --- 00:11:02.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.606 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64771 00:11:02.606 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:02.607 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64771 00:11:02.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:02.607 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64771 ']' 00:11:02.607 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.607 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.607 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.607 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.607 19:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:02.607 [2024-12-05 19:30:55.980517] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:11:02.607 [2024-12-05 19:30:55.980849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.865 [2024-12-05 19:30:56.124435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.865 [2024-12-05 19:30:56.195380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.865 [2024-12-05 19:30:56.195674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.865 [2024-12-05 19:30:56.195865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.865 [2024-12-05 19:30:56.196001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.865 [2024-12-05 19:30:56.196282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:02.865 [2024-12-05 19:30:56.197545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.865 [2024-12-05 19:30:56.197676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.865 [2024-12-05 19:30:56.197757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.865 [2024-12-05 19:30:56.197756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.865 [2024-12-05 19:30:56.253905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:03.123 19:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.123 19:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:11:03.123 19:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:03.123 19:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:03.123 19:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:03.123 19:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.123 19:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:03.381 [2024-12-05 19:30:56.688938] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.381 19:30:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:03.954 Malloc0 00:11:03.954 19:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:04.212 19:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:04.482 19:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:04.739 [2024-12-05 19:30:58.085703] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:04.739 19:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:11:04.998 [2024-12-05 19:30:58.433981] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:11:05.256 19:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:05.256 19:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:11:05.513 19:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.513 19:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:11:05.513 19:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.513 19:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:05.513 19:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64864 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:07.414 19:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:07.414 [global] 00:11:07.414 thread=1 00:11:07.414 invalidate=1 00:11:07.414 rw=randrw 00:11:07.414 time_based=1 00:11:07.414 runtime=6 00:11:07.414 ioengine=libaio 00:11:07.414 direct=1 00:11:07.414 bs=4096 00:11:07.414 iodepth=128 00:11:07.414 norandommap=0 00:11:07.414 numjobs=1 00:11:07.414 00:11:07.414 verify_dump=1 00:11:07.414 verify_backlog=512 00:11:07.414 verify_state_save=0 00:11:07.414 do_verify=1 00:11:07.414 verify=crc32c-intel 00:11:07.414 [job0] 00:11:07.414 filename=/dev/nvme0n1 00:11:07.414 Could not set queue depth (nvme0n1) 00:11:07.673 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.673 fio-3.35 00:11:07.673 Starting 1 thread 00:11:08.610 19:31:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:08.868 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:09.218 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:09.218 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:09.218 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
00:11:09.218 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:09.218 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:09.218 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:09.218 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:09.218 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:09.218 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:09.218 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:09.218 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:09.218 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:09.218 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:09.218 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:09.477 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:09.477 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:09.477 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:09.477 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:09.477 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:09.477 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:09.477 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:09.477 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:09.477 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:09.477 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:09.477 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:09.477 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:09.477 19:31:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64864 00:11:13.661 00:11:13.662 job0: (groupid=0, jobs=1): err= 0: pid=64885: Thu Dec 5 19:31:07 2024 00:11:13.662 read: IOPS=9946, BW=38.9MiB/s (40.7MB/s)(233MiB/6007msec) 00:11:13.662 slat (usec): min=2, max=12356, avg=60.06, stdev=252.84 00:11:13.662 clat (usec): min=1552, max=21873, avg=8788.36, stdev=1768.19 00:11:13.662 lat (usec): min=1596, max=21882, avg=8848.41, stdev=1775.15 00:11:13.662 clat percentiles (usec): 00:11:13.662 | 1.00th=[ 4424], 5.00th=[ 6456], 10.00th=[ 7308], 20.00th=[ 7832], 00:11:13.662 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8848], 00:11:13.662 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10421], 95.00th=[12518], 00:11:13.662 | 99.00th=[14222], 99.50th=[15533], 99.90th=[20579], 99.95th=[20841], 00:11:13.662 | 99.99th=[21627] 00:11:13.662 bw ( KiB/s): min= 5984, max=25912, per=51.82%, avg=20617.45, stdev=6233.52, samples=11 00:11:13.662 iops : min= 1496, max= 6478, avg=5154.36, stdev=1558.38, samples=11 00:11:13.662 write: IOPS=5985, BW=23.4MiB/s (24.5MB/s)(122MiB/5238msec); 0 zone resets 00:11:13.662 slat (usec): min=3, max=4529, avg=66.75, stdev=168.84 00:11:13.662 clat (usec): min=1183, max=17099, avg=7553.04, stdev=1426.29 00:11:13.662 lat (usec): min=1238, max=17123, avg=7619.79, stdev=1431.38 00:11:13.662 clat percentiles (usec): 00:11:13.662 | 1.00th=[ 3392], 5.00th=[ 4490], 10.00th=[ 5866], 20.00th=[ 6915], 00:11:13.662 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7898], 00:11:13.662 | 70.00th=[ 8094], 80.00th=[ 8356], 90.00th=[ 8717], 95.00th=[ 9241], 00:11:13.662 | 99.00th=[11994], 99.50th=[12780], 99.90th=[15139], 99.95th=[15926], 00:11:13.662 | 99.99th=[16188] 00:11:13.662 bw ( KiB/s): min= 6304, max=25672, per=86.42%, avg=20692.36, stdev=6084.70, samples=11 00:11:13.662 iops : min= 1576, max= 6418, avg=5173.09, stdev=1521.17, samples=11 00:11:13.662 lat (msec) : 2=0.03%, 4=1.39%, 10=89.21%, 20=9.19%, 50=0.18% 00:11:13.662 cpu : usr=5.29%, sys=20.41%, ctx=5284, majf=0, minf=108 00:11:13.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:13.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.662 issued rwts: total=59748,31353,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.662 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.662 00:11:13.662 Run status group 0 (all jobs): 00:11:13.662 READ: bw=38.9MiB/s (40.7MB/s), 38.9MiB/s-38.9MiB/s (40.7MB/s-40.7MB/s), io=233MiB (245MB), run=6007-6007msec 00:11:13.662 WRITE: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=122MiB (128MB), run=5238-5238msec 00:11:13.662 00:11:13.662 Disk stats (read/write): 00:11:13.662 nvme0n1: ios=58917/30748, merge=0/0, ticks=496687/218333, in_queue=715020, util=98.60% 00:11:13.662 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:14.229 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:11:14.518 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:14.518 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:14.518 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:14.518 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:14.518 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:14.518 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:14.518 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:14.518 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:14.518 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:14.518 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:14.518 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:14.518 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:14.518 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:14.518 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:14.518 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64969 00:11:14.518 19:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:14.518 [global] 00:11:14.518 thread=1 00:11:14.518 invalidate=1 00:11:14.518 rw=randrw 00:11:14.518 time_based=1 00:11:14.518 runtime=6 00:11:14.518 ioengine=libaio 00:11:14.518 direct=1 00:11:14.518 bs=4096 00:11:14.518 iodepth=128 00:11:14.518 norandommap=0 00:11:14.518 numjobs=1 00:11:14.518 00:11:14.518 verify_dump=1 00:11:14.518 verify_backlog=512 00:11:14.518 verify_state_save=0 00:11:14.518 do_verify=1 00:11:14.518 verify=crc32c-intel 00:11:14.518 [job0] 00:11:14.518 filename=/dev/nvme0n1 00:11:14.518 Could not set queue depth (nvme0n1) 00:11:14.518 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:14.518 fio-3.35 00:11:14.518 Starting 1 thread 00:11:15.465 19:31:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:15.724 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:15.984 
19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:15.984 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:15.984 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:15.984 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:15.984 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:15.984 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:15.984 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:15.984 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:15.984 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:15.984 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:15.984 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:15.984 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:15.984 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:16.244 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:16.502 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:16.503 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:16.503 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:16.503 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:16.503 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:16.503 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:16.503 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:16.503 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:16.503 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:16.503 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:16.503 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:16.503 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:16.503 19:31:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64969 00:11:20.694 00:11:20.694 job0: (groupid=0, jobs=1): err= 0: pid=64990: Thu Dec 5 19:31:14 2024 00:11:20.694 read: IOPS=11.0k, BW=42.8MiB/s (44.9MB/s)(257MiB/6005msec) 00:11:20.694 slat (usec): min=2, max=9910, avg=45.69, stdev=214.77 00:11:20.694 clat (usec): min=298, max=19852, avg=8039.98, stdev=2554.05 00:11:20.694 lat (usec): min=308, max=19865, avg=8085.67, stdev=2570.08 00:11:20.694 clat percentiles (usec): 00:11:20.694 | 1.00th=[ 1598], 5.00th=[ 2868], 10.00th=[ 3982], 20.00th=[ 6325], 00:11:20.694 | 30.00th=[ 7701], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8717], 00:11:20.694 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[10552], 95.00th=[12256], 00:11:20.694 | 99.00th=[14091], 99.50th=[14746], 99.90th=[17695], 99.95th=[18482], 00:11:20.694 | 99.99th=[19268] 00:11:20.694 bw ( KiB/s): min= 4576, max=35392, per=52.87%, avg=23182.73, stdev=8514.30, samples=11 00:11:20.694 iops : min= 1144, max= 8848, avg=5795.64, stdev=2128.58, samples=11 00:11:20.694 write: IOPS=6504, BW=25.4MiB/s (26.6MB/s)(135MiB/5315msec); 0 zone resets 00:11:20.694 slat (usec): min=3, max=2686, avg=55.21, stdev=147.05 00:11:20.694 clat (usec): min=608, max=18666, avg=6742.87, stdev=2156.45 00:11:20.694 lat (usec): min=630, max=19696, avg=6798.08, stdev=2173.10 00:11:20.694 clat percentiles (usec): 00:11:20.694 | 1.00th=[ 1958], 5.00th=[ 2769], 10.00th=[ 3425], 20.00th=[ 4555], 00:11:20.694 | 30.00th=[ 5866], 40.00th=[ 6915], 50.00th=[ 7373], 60.00th=[ 7701], 00:11:20.694 | 70.00th=[ 7963], 80.00th=[ 8225], 90.00th=[ 8717], 95.00th=[ 9241], 00:11:20.694 | 99.00th=[11994], 99.50th=[12649], 99.90th=[15270], 99.95th=[16909], 00:11:20.694 | 99.99th=[18744] 00:11:20.694 bw ( KiB/s): min= 4952, max=36176, per=89.20%, avg=23209.73, stdev=8345.75, samples=11 00:11:20.694 iops : min= 1238, max= 9044, avg=5802.36, stdev=2086.45, samples=11 00:11:20.694 lat (usec) : 500=0.02%, 750=0.04%, 1000=0.15% 00:11:20.694 lat (msec) : 2=1.45%, 4=10.06%, 10=78.70%, 20=9.58% 00:11:20.694 cpu : usr=5.90%, sys=21.05%, ctx=5814, majf=0, minf=127 00:11:20.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:20.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:20.694 issued rwts: total=65828,34572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.694 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:11:20.694 00:11:20.694 Run status group 0 (all jobs): 00:11:20.694 READ: bw=42.8MiB/s (44.9MB/s), 42.8MiB/s-42.8MiB/s (44.9MB/s-44.9MB/s), io=257MiB (270MB), run=6005-6005msec 00:11:20.694 WRITE: bw=25.4MiB/s (26.6MB/s), 25.4MiB/s-25.4MiB/s (26.6MB/s-26.6MB/s), io=135MiB (142MB), run=5315-5315msec 00:11:20.694 00:11:20.694 Disk stats (read/write): 00:11:20.694 nvme0n1: ios=64930/34030, merge=0/0, ticks=499947/215042, in_queue=714989, util=98.66% 00:11:20.694 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:20.694 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.694 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:11:20.694 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:20.694 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.694 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:20.694 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.694 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:11:20.694 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:21.262 rmmod nvme_tcp 00:11:21.262 rmmod nvme_fabrics 00:11:21.262 rmmod nvme_keyring 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
64771 ']' 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64771 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64771 ']' 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64771 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64771 00:11:21.262 killing process with pid 64771 00:11:21.262 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.263 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.263 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64771' 00:11:21.263 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64771 00:11:21.263 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64771 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:21.522 19:31:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:21.522 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:21.781 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:21.781 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:21.781 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.781 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.781 19:31:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.781 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:11:21.781 00:11:21.781 real 0m19.779s 00:11:21.781 user 1m12.970s 00:11:21.781 sys 0m10.094s 00:11:21.781 ************************************ 00:11:21.781 END TEST nvmf_target_multipath 00:11:21.781 ************************************ 00:11:21.781 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.781 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:21.781 19:31:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:21.781 19:31:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.781 19:31:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.781 19:31:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:21.781 ************************************ 00:11:21.781 START TEST nvmf_zcopy 00:11:21.781 ************************************ 00:11:21.781 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:21.781 * Looking for test storage... 
00:11:21.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:21.781 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:21.781 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:11:21.781 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:22.088 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:22.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.089 --rc genhtml_branch_coverage=1 00:11:22.089 --rc genhtml_function_coverage=1 00:11:22.089 --rc genhtml_legend=1 00:11:22.089 --rc geninfo_all_blocks=1 00:11:22.089 --rc geninfo_unexecuted_blocks=1 00:11:22.089 00:11:22.089 ' 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:22.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.089 --rc genhtml_branch_coverage=1 00:11:22.089 --rc genhtml_function_coverage=1 00:11:22.089 --rc genhtml_legend=1 00:11:22.089 --rc geninfo_all_blocks=1 00:11:22.089 --rc geninfo_unexecuted_blocks=1 00:11:22.089 00:11:22.089 ' 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:22.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.089 --rc genhtml_branch_coverage=1 00:11:22.089 --rc genhtml_function_coverage=1 00:11:22.089 --rc genhtml_legend=1 00:11:22.089 --rc geninfo_all_blocks=1 00:11:22.089 --rc geninfo_unexecuted_blocks=1 00:11:22.089 00:11:22.089 ' 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:22.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.089 --rc genhtml_branch_coverage=1 00:11:22.089 --rc genhtml_function_coverage=1 00:11:22.089 --rc genhtml_legend=1 00:11:22.089 --rc geninfo_all_blocks=1 00:11:22.089 --rc geninfo_unexecuted_blocks=1 00:11:22.089 00:11:22.089 ' 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.089 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:22.089 Cannot find device "nvmf_init_br" 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:22.089 19:31:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:22.089 Cannot find device "nvmf_init_br2" 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:22.089 Cannot find device "nvmf_tgt_br" 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:22.089 Cannot find device "nvmf_tgt_br2" 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:22.089 Cannot find device "nvmf_init_br" 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:22.089 Cannot find device "nvmf_init_br2" 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:22.089 Cannot find device "nvmf_tgt_br" 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:22.089 Cannot find device "nvmf_tgt_br2" 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:22.089 Cannot find device "nvmf_br" 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:22.089 Cannot find device "nvmf_init_if" 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:22.089 Cannot find device "nvmf_init_if2" 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:22.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:22.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:22.089 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:22.349 19:31:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:22.349 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:22.349 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:11:22.349 00:11:22.349 --- 10.0.0.3 ping statistics --- 00:11:22.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.349 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:22.349 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:22.349 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:11:22.349 00:11:22.349 --- 10.0.0.4 ping statistics --- 00:11:22.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.349 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:22.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:22.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:11:22.349 00:11:22.349 --- 10.0.0.1 ping statistics --- 00:11:22.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.349 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:22.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:22.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:11:22.349 00:11:22.349 --- 10.0.0.2 ping statistics --- 00:11:22.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.349 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:22.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65290 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65290 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65290 ']' 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.349 19:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:22.350 [2024-12-05 19:31:15.764934] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:11:22.350 [2024-12-05 19:31:15.765018] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.608 [2024-12-05 19:31:15.908824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.608 [2024-12-05 19:31:15.987572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.608 [2024-12-05 19:31:15.987654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.608 [2024-12-05 19:31:15.987704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.608 [2024-12-05 19:31:15.987717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.608 [2024-12-05 19:31:15.987727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.608 [2024-12-05 19:31:15.988191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.867 [2024-12-05 19:31:16.047444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:23.435 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.435 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:23.435 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:23.436 [2024-12-05 19:31:16.774934] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.436 [2024-12-05 19:31:16.794992] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:23.436 malloc0 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:23.436 { 00:11:23.436 "params": { 00:11:23.436 "name": "Nvme$subsystem", 00:11:23.436 "trtype": "$TEST_TRANSPORT", 00:11:23.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:23.436 "adrfam": "ipv4", 00:11:23.436 "trsvcid": "$NVMF_PORT", 00:11:23.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:23.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:23.436 "hdgst": ${hdgst:-false}, 00:11:23.436 "ddgst": ${ddgst:-false} 00:11:23.436 }, 00:11:23.436 "method": "bdev_nvme_attach_controller" 00:11:23.436 } 00:11:23.436 EOF 00:11:23.436 )") 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:23.436 19:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:23.436 "params": { 00:11:23.436 "name": "Nvme1", 00:11:23.436 "trtype": "tcp", 00:11:23.436 "traddr": "10.0.0.3", 00:11:23.436 "adrfam": "ipv4", 00:11:23.436 "trsvcid": "4420", 00:11:23.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:23.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:23.436 "hdgst": false, 00:11:23.436 "ddgst": false 00:11:23.436 }, 00:11:23.436 "method": "bdev_nvme_attach_controller" 00:11:23.436 }' 00:11:23.695 [2024-12-05 19:31:16.919376] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:11:23.695 [2024-12-05 19:31:16.919538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65323 ] 00:11:23.695 [2024-12-05 19:31:17.070989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.953 [2024-12-05 19:31:17.131737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.953 [2024-12-05 19:31:17.202941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:23.953 Running I/O for 10 seconds... 00:11:25.897 5823.00 IOPS, 45.49 MiB/s [2024-12-05T19:31:20.706Z] 5558.50 IOPS, 43.43 MiB/s [2024-12-05T19:31:21.643Z] 5432.00 IOPS, 42.44 MiB/s [2024-12-05T19:31:22.578Z] 5371.00 IOPS, 41.96 MiB/s [2024-12-05T19:31:23.515Z] 5434.40 IOPS, 42.46 MiB/s [2024-12-05T19:31:24.451Z] 5500.33 IOPS, 42.97 MiB/s [2024-12-05T19:31:25.388Z] 5535.43 IOPS, 43.25 MiB/s [2024-12-05T19:31:26.763Z] 5570.88 IOPS, 43.52 MiB/s [2024-12-05T19:31:27.389Z] 5597.56 IOPS, 43.73 MiB/s [2024-12-05T19:31:27.389Z] 5616.90 IOPS, 43.88 MiB/s 00:11:33.952 Latency(us) 00:11:33.952 [2024-12-05T19:31:27.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.952 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:33.952 Verification LBA range: start 0x0 length 0x1000 00:11:33.952 Nvme1n1 : 10.01 5617.68 43.89 0.00 0.00 22713.83 562.27 33125.47 00:11:33.952 [2024-12-05T19:31:27.389Z] =================================================================================================================== 00:11:33.952 [2024-12-05T19:31:27.389Z] Total : 5617.68 43.89 0.00 0.00 22713.83 562.27 33125.47 00:11:34.228 19:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65445 00:11:34.228 19:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:34.228 19:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:34.228 19:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:34.228 19:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:34.228 19:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:34.228 19:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:34.228 19:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:34.228 19:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:34.228 { 00:11:34.228 "params": { 00:11:34.229 "name": "Nvme$subsystem", 00:11:34.229 "trtype": "$TEST_TRANSPORT", 00:11:34.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:34.229 "adrfam": "ipv4", 00:11:34.229 "trsvcid": "$NVMF_PORT", 00:11:34.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:34.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:34.229 "hdgst": ${hdgst:-false}, 00:11:34.229 "ddgst": ${ddgst:-false} 00:11:34.229 }, 00:11:34.229 "method": "bdev_nvme_attach_controller" 00:11:34.229 } 00:11:34.229 EOF 00:11:34.229 )") 00:11:34.229 19:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:34.229 [2024-12-05 19:31:27.552379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.229 [2024-12-05 19:31:27.552426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.229 19:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:11:34.229 19:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:34.229 19:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:34.229 "params": { 00:11:34.229 "name": "Nvme1", 00:11:34.229 "trtype": "tcp", 00:11:34.229 "traddr": "10.0.0.3", 00:11:34.229 "adrfam": "ipv4", 00:11:34.229 "trsvcid": "4420", 00:11:34.229 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:34.229 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:34.229 "hdgst": false, 00:11:34.229 "ddgst": false 00:11:34.229 }, 00:11:34.229 "method": "bdev_nvme_attach_controller" 00:11:34.229 }' 00:11:34.229 [2024-12-05 19:31:27.564336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.229 [2024-12-05 19:31:27.564367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.229 [2024-12-05 19:31:27.580335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.229 [2024-12-05 19:31:27.580367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.229 [2024-12-05 19:31:27.588346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.229 [2024-12-05 19:31:27.588381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.229 [2024-12-05 19:31:27.596347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.229 [2024-12-05 19:31:27.596382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.229 [2024-12-05 19:31:27.600637] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:11:34.229 [2024-12-05 19:31:27.600726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65445 ] 00:11:34.229 [2024-12-05 19:31:27.608345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.229 [2024-12-05 19:31:27.608375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.229 [2024-12-05 19:31:27.620370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.229 [2024-12-05 19:31:27.620404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.229 [2024-12-05 19:31:27.632375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.229 [2024-12-05 19:31:27.632412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.229 [2024-12-05 19:31:27.644365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.229 [2024-12-05 19:31:27.644398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.229 [2024-12-05 19:31:27.656364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.229 [2024-12-05 19:31:27.656396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.668376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.668417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.680380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.680418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.692372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.692400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.704374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.704404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.716417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.716466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.728422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.728474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.736414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.736456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.744411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.744453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.748964] app.c: 919:spdk_app_start: *NOTICE*: 
Total cores available: 1 00:11:34.489 [2024-12-05 19:31:27.752416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.752452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.764451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.764513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.772421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.772457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.780408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.780442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.788420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.788458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.796421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.796460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.804434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.804477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.811648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.489 [2024-12-05 19:31:27.816433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.816488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.824426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.824464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.836461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.836519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.848446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.848492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.860448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.860491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.872451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.489 [2024-12-05 19:31:27.872506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.874494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:34.489 [2024-12-05 19:31:27.884456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:34.489 [2024-12-05 19:31:27.884512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.489 [2024-12-05 19:31:27.896470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.490 [2024-12-05 19:31:27.896533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.490 [2024-12-05 19:31:27.908457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.490 [2024-12-05 19:31:27.908507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.490 [2024-12-05 19:31:27.920469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.490 [2024-12-05 19:31:27.920524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:27.932485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:27.932533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:27.944477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:27.944513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:27.956497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:27.956535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:27.968507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:27.968551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:27.980532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:27.980581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 Running I/O for 5 seconds... 
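[editor note] For readers following the trace: the JSON fragment printed by gen_nvmf_target_json above is what bdevperf consumes via the --json /dev/fd/63 process substitution in the zcopy test. Below is a minimal, illustrative sketch of an equivalent standalone run. It assumes the standard SPDK "subsystems"/"config" JSON layout accepted by bdevperf --json, writes the config to a hypothetical /tmp path instead of /dev/fd/63, and simply reuses the parameter values already printed in this log (target 10.0.0.3:4420, nqn.2016-06.io.spdk:cnode1). It is a sketch, not part of the test script.
# Sketch only: stand-in config file path; values copied from the log above.
cat > /tmp/bdevperf_nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same bdevperf flags as the run above: 5-second randrw at queue depth 128, 8192-byte I/O, 50% reads.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvme1.json -t 5 -q 128 -w randrw -M 50 -o 8192
[end editor note]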
00:11:34.749 [2024-12-05 19:31:27.992535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:27.992578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:28.009953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:28.010270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:28.027902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:28.027957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:28.042802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:28.042856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:28.052401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:28.052458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:28.068537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:28.068595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:28.086157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:28.086217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:28.101440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:28.101717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:28.111712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:28.111754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:28.123275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:28.123313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:28.134251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:28.134303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:28.147657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:28.147716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:28.163000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:28.163065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.749 [2024-12-05 19:31:28.179303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.749 [2024-12-05 19:31:28.179357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.197778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 
[2024-12-05 19:31:28.197845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.213811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 [2024-12-05 19:31:28.213875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.230594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 [2024-12-05 19:31:28.230660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.246454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 [2024-12-05 19:31:28.246504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.256615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 [2024-12-05 19:31:28.256838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.271092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 [2024-12-05 19:31:28.271159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.288031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 [2024-12-05 19:31:28.288091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.297938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 [2024-12-05 19:31:28.297989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.312785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 [2024-12-05 19:31:28.312842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.329155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 [2024-12-05 19:31:28.329212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.346265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 [2024-12-05 19:31:28.346308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.363776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 [2024-12-05 19:31:28.363821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.374397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 [2024-12-05 19:31:28.374438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.388982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 [2024-12-05 19:31:28.389023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.406371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 [2024-12-05 19:31:28.406423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.416494] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 [2024-12-05 19:31:28.416535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.009 [2024-12-05 19:31:28.430903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.009 [2024-12-05 19:31:28.430949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.268 [2024-12-05 19:31:28.447924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.268 [2024-12-05 19:31:28.447969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.268 [2024-12-05 19:31:28.458386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.268 [2024-12-05 19:31:28.458428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.268 [2024-12-05 19:31:28.473372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.268 [2024-12-05 19:31:28.473594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.268 [2024-12-05 19:31:28.488359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.268 [2024-12-05 19:31:28.488534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.268 [2024-12-05 19:31:28.503985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.268 [2024-12-05 19:31:28.504168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.268 [2024-12-05 19:31:28.514249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.268 [2024-12-05 19:31:28.514405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.268 [2024-12-05 19:31:28.529648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.268 [2024-12-05 19:31:28.529836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.268 [2024-12-05 19:31:28.545541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.268 [2024-12-05 19:31:28.545758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.268 [2024-12-05 19:31:28.561479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.268 [2024-12-05 19:31:28.561678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.269 [2024-12-05 19:31:28.571722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.269 [2024-12-05 19:31:28.571875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.269 [2024-12-05 19:31:28.586534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.269 [2024-12-05 19:31:28.586684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.269 [2024-12-05 19:31:28.597124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.269 [2024-12-05 19:31:28.597274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.269 [2024-12-05 19:31:28.610642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.269 [2024-12-05 19:31:28.610794] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.269 [2024-12-05 19:31:28.619783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.269 [2024-12-05 19:31:28.619934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.269 [2024-12-05 19:31:28.636076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.269 [2024-12-05 19:31:28.636243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.269 [2024-12-05 19:31:28.647077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.269 [2024-12-05 19:31:28.647266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.269 [2024-12-05 19:31:28.661934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.269 [2024-12-05 19:31:28.662099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.269 [2024-12-05 19:31:28.679370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.269 [2024-12-05 19:31:28.679552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.269 [2024-12-05 19:31:28.695299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.269 [2024-12-05 19:31:28.695473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.705509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.705689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.717636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.717799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.728312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.728461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.743246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.743431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.759565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.759719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.769368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.769523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.783293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.783463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.794727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.794765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.810910] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.810959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.827779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.827819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.844577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.844635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.860324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.860363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.878098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.878296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.888228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.888267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.902835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.902876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.919296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.919343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.936016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.936058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.945816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.945997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.528 [2024-12-05 19:31:28.960531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.528 [2024-12-05 19:31:28.960737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.787 [2024-12-05 19:31:28.976979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.787 [2024-12-05 19:31:28.977022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.787 [2024-12-05 19:31:28.992547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.787 [2024-12-05 19:31:28.992592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.787 11309.00 IOPS, 88.35 MiB/s [2024-12-05T19:31:29.224Z] [2024-12-05 19:31:29.002461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.787 [2024-12-05 19:31:29.002523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.787 [2024-12-05 19:31:29.014357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:35.787 [2024-12-05 19:31:29.014397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.787 [2024-12-05 19:31:29.025216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.787 [2024-12-05 19:31:29.025253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.787 [2024-12-05 19:31:29.040210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.787 [2024-12-05 19:31:29.040249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.787 [2024-12-05 19:31:29.057146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.787 [2024-12-05 19:31:29.057183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.787 [2024-12-05 19:31:29.071805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.787 [2024-12-05 19:31:29.071998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.787 [2024-12-05 19:31:29.087996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.787 [2024-12-05 19:31:29.088047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.787 [2024-12-05 19:31:29.098391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.787 [2024-12-05 19:31:29.098545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.787 [2024-12-05 19:31:29.110545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.787 [2024-12-05 19:31:29.110694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.787 [2024-12-05 19:31:29.125439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.787 [2024-12-05 19:31:29.125592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.787 [2024-12-05 19:31:29.135227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.787 [2024-12-05 19:31:29.135390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.787 [2024-12-05 19:31:29.148458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.787 [2024-12-05 19:31:29.148638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.787 [2024-12-05 19:31:29.159971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.787 [2024-12-05 19:31:29.160179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.787 [2024-12-05 19:31:29.175394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.787 [2024-12-05 19:31:29.175573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-12-05 19:31:29.185402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-12-05 19:31:29.185546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-12-05 19:31:29.197675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-12-05 19:31:29.197830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-12-05 19:31:29.209298] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-12-05 19:31:29.209466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-12-05 19:31:29.220770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-12-05 19:31:29.220922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.232998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.233194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.249519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.249685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.259067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.259269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.271238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.271393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.282322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.282475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.293700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.293863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.308721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.308888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.319872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.320027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.335080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.335278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.351977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.352172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.367789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.367946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.384539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.384706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.402523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.402689] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.412969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.413007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.427746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.427785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.444662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.444702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.454309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.454345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-12-05 19:31:29.470200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-12-05 19:31:29.470239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.487448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.487490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.497802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.497999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.509747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.509901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.525054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.525328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.541623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.541780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.551806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.551967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.566378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.566536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.576798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.576947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.591549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.591703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.607837] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.607991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.625700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.625855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.640788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.640940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.650542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.650688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.662601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.662770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.678650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.678802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.695546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.695700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.712216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.712383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.722457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.722609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.307 [2024-12-05 19:31:29.736849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.307 [2024-12-05 19:31:29.737005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.747053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.747220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.761763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.761943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.772319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.772356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.787057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.787097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.806962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.807146] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.817825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.817874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.834619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.834788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.851236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.851275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.861042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.861081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.875988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.876028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.887693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.887738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.902246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.902286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.919144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.919183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.929619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.929778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.944498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.944656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.961670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.961712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.971677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.971852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 [2024-12-05 19:31:29.986720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:29.986880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.569 11350.50 IOPS, 88.68 MiB/s [2024-12-05T19:31:30.006Z] [2024-12-05 19:31:30.002979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.569 [2024-12-05 19:31:30.003020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 
19:31:30.020361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.020402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.035665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.035706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.045154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.045191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.061162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.061200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.071377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.071424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.085850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.085891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.102097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.102148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.111997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.112036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.127433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.127602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.144561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.144602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.160911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.160951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.178587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.178760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.194180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.194346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.204441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.204595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.216751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.216905] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.232045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.232223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.242155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.242319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.828 [2024-12-05 19:31:30.257637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.828 [2024-12-05 19:31:30.257801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.273620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.273775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.291013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.291197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.306985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.307142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.323691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.323844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.333768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.333934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.348690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.348844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.364414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.364642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.380030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.380289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.389860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.390052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.406251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.406513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.421971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.422178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.438700] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.438852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.455225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.455377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.465142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.465291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.477281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.477318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.488499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.488539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.087 [2024-12-05 19:31:30.505395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.087 [2024-12-05 19:31:30.505434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.524174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.524212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.534834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.534871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.548050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.548240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.564762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.564802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.573989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.574028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.587052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.587097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.598177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.598215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.610737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.610776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.628084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.628256] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.642909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.642948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.659244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.659281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.675811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.675850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.692315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.692351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.702528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.702680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.717050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.717090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.726866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.726904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.741409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.741567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.757631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.757679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.346 [2024-12-05 19:31:30.767268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.346 [2024-12-05 19:31:30.767318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:30.783717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:30.783776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:30.794387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:30.794431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:30.809069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:30.809314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:30.824390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:30.824586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:30.834336] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:30.834375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:30.846471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:30.846510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:30.861439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:30.861476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:30.871931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:30.872107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:30.886627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:30.886689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:30.903652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:30.903805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:30.921540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:30.921577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:30.936359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:30.936396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:30.945976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:30.946014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:30.963589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:30.963646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:30.979132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:30.979172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:30.988290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:30.988328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 11377.33 IOPS, 88.89 MiB/s [2024-12-05T19:31:31.041Z] [2024-12-05 19:31:31.003689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:31.003739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:31.022162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.604 [2024-12-05 19:31:31.022215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.604 [2024-12-05 19:31:31.037417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
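The long run of identical messages above appears to be the zcopy test repeatedly attempting to add a namespace with NSID 1 to nqn.2016-06.io.spdk:cnode1 while that NSID is already attached and I/O is still in flight (note the interleaved IOPS/MiB/s progress lines); each attempt is rejected in spdk_nvmf_subsystem_add_ns_ext and surfaces through the RPC pause path as "Unable to add namespace". A minimal way to provoke one such rejection against a running target is sketched below; the NQN, bdev name, and NSID are taken from commands that appear later in this log, the exact bdev used inside the loop is not shown, and rpc_cmd in the trace is assumed to forward to scripts/rpc.py against the same target:

    # Sketch only: the first add succeeds, the second is expected to fail with
    # "Requested NSID 1 already in use", matching the errors logged above.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1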
00:11:37.604 [2024-12-05 19:31:31.037653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.861 [2024-12-05 19:31:31.047483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.861 [2024-12-05 19:31:31.047522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.861 [2024-12-05 19:31:31.058855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.861 [2024-12-05 19:31:31.058894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.861 [2024-12-05 19:31:31.075727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.861 [2024-12-05 19:31:31.075767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.861 [2024-12-05 19:31:31.086157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.861 [2024-12-05 19:31:31.086195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.862 [2024-12-05 19:31:31.097402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.862 [2024-12-05 19:31:31.097440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.862 [2024-12-05 19:31:31.108252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.862 [2024-12-05 19:31:31.108289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.862 [2024-12-05 19:31:31.123360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.862 [2024-12-05 19:31:31.123535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.862 [2024-12-05 19:31:31.140803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.862 [2024-12-05 19:31:31.140840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.862 [2024-12-05 19:31:31.151300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.862 [2024-12-05 19:31:31.151337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.862 [2024-12-05 19:31:31.162511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.862 [2024-12-05 19:31:31.162679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.862 [2024-12-05 19:31:31.174015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.862 [2024-12-05 19:31:31.174178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.862 [2024-12-05 19:31:31.191649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.862 [2024-12-05 19:31:31.191802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.862 [2024-12-05 19:31:31.207059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.862 [2024-12-05 19:31:31.207254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.862 [2024-12-05 19:31:31.217050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.862 [2024-12-05 19:31:31.217234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.862 [2024-12-05 19:31:31.232425] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.862 [2024-12-05 19:31:31.232578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.862 [2024-12-05 19:31:31.248619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.862 [2024-12-05 19:31:31.248769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.862 [2024-12-05 19:31:31.258683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.862 [2024-12-05 19:31:31.258832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.862 [2024-12-05 19:31:31.273604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.862 [2024-12-05 19:31:31.273820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.862 [2024-12-05 19:31:31.290007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.862 [2024-12-05 19:31:31.290259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.119 [2024-12-05 19:31:31.299879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.119 [2024-12-05 19:31:31.300065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.119 [2024-12-05 19:31:31.312010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.119 [2024-12-05 19:31:31.312260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.119 [2024-12-05 19:31:31.323079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.119 [2024-12-05 19:31:31.323260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.119 [2024-12-05 19:31:31.336488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.119 [2024-12-05 19:31:31.336642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.119 [2024-12-05 19:31:31.346817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.119 [2024-12-05 19:31:31.346963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.119 [2024-12-05 19:31:31.361956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.119 [2024-12-05 19:31:31.362122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.119 [2024-12-05 19:31:31.378230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.119 [2024-12-05 19:31:31.378378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.119 [2024-12-05 19:31:31.388350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.119 [2024-12-05 19:31:31.388498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.119 [2024-12-05 19:31:31.403085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.119 [2024-12-05 19:31:31.403259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.119 [2024-12-05 19:31:31.419906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.119 [2024-12-05 19:31:31.420060] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.119 [2024-12-05 19:31:31.430519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.119 [2024-12-05 19:31:31.430674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.119 [2024-12-05 19:31:31.442516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.119 [2024-12-05 19:31:31.442556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.120 [2024-12-05 19:31:31.453245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.120 [2024-12-05 19:31:31.453283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.120 [2024-12-05 19:31:31.471471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.120 [2024-12-05 19:31:31.471510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.120 [2024-12-05 19:31:31.486722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.120 [2024-12-05 19:31:31.486765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.120 [2024-12-05 19:31:31.495608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.120 [2024-12-05 19:31:31.495647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.120 [2024-12-05 19:31:31.512120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.120 [2024-12-05 19:31:31.512177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.120 [2024-12-05 19:31:31.522329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.120 [2024-12-05 19:31:31.522367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.120 [2024-12-05 19:31:31.537482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.120 [2024-12-05 19:31:31.537521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.120 [2024-12-05 19:31:31.552469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.120 [2024-12-05 19:31:31.552646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.563011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.563057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.575086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.575156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.589796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.589839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.599772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.599931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.611599] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.611751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.622680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.622833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.637831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.638019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.656611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.656795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.671845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.672081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.681769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.681974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.697675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.697840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.713665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.713821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.732337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.732485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.747558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.747710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.756837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.756984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.768645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.768796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.779924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.780074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.793275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.793424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.378 [2024-12-05 19:31:31.809458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.378 [2024-12-05 19:31:31.809611] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:31.827210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:31.827362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:31.837652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:31.837802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:31.852937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:31.853089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:31.869259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:31.869429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:31.879570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:31.879715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:31.894397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:31.894570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:31.910948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:31.910998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:31.920357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:31.920395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:31.931944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:31.931981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:31.946822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:31.946863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:31.961601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:31.961808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:31.978187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:31.978233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:31.988076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:31.988146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 11389.25 IOPS, 88.98 MiB/s [2024-12-05T19:31:32.136Z] [2024-12-05 19:31:32.003083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:32.003285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 
19:31:32.018261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:32.018416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:32.028055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:32.028275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:32.039919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:32.040177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:32.056089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:32.056378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:32.073969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:32.074204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:32.086078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:32.086264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.699 [2024-12-05 19:31:32.100808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.699 [2024-12-05 19:31:32.100978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.116052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.116236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.126110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.126182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.138141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.138188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.153094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.153152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.170391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.170446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.186676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.186730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.204964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.205022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.215992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.216248] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.227642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.227796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.240908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.240950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.252317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.252359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.262791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.262947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.278435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.278477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.294195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.294243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.303440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.303481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.315440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.315480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.326361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.326399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.337780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.337961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.348766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.348957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.362871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.362909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.378427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.378466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.957 [2024-12-05 19:31:32.388000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.957 [2024-12-05 19:31:32.388201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.399980] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.400177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.411158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.411209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.422571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.422752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.433893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.434088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.445759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.445964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.460908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.461166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.477507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.477576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.493846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.493884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.510175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.510285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.527633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.527682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.544507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.544724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.554990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.555216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.569829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.570009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.579824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.580004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.595949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.596191] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.612772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.612977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.624435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.624662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.639540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.639698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.215 [2024-12-05 19:31:32.650094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.215 [2024-12-05 19:31:32.650301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.663223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.663427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.677982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.678205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.688634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.688795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.700813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.700968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.715833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.716031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.726043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.726234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.741259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.741426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.752018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.752215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.763368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.763531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.776061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.776321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.790818] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.790859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.806563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.806609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.816406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.816448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.828830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.828877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.840331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.840370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.851951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.851991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.472 [2024-12-05 19:31:32.864362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.472 [2024-12-05 19:31:32.864403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.473 [2024-12-05 19:31:32.880581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.473 [2024-12-05 19:31:32.880656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.473 [2024-12-05 19:31:32.897495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.473 [2024-12-05 19:31:32.897567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.730 [2024-12-05 19:31:32.911774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.730 [2024-12-05 19:31:32.912833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.730 [2024-12-05 19:31:32.928819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.730 [2024-12-05 19:31:32.928887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.730 [2024-12-05 19:31:32.944361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.730 [2024-12-05 19:31:32.944417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.730 [2024-12-05 19:31:32.954571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.730 [2024-12-05 19:31:32.954617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.730 [2024-12-05 19:31:32.967062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.730 [2024-12-05 19:31:32.967107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.730 [2024-12-05 19:31:32.981497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.730 [2024-12-05 19:31:32.981572] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.730 [2024-12-05 19:31:32.991609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.730 [2024-12-05 19:31:32.991863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.730 11282.20 IOPS, 88.14 MiB/s [2024-12-05T19:31:33.167Z] [2024-12-05 19:31:33.002404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.730 [2024-12-05 19:31:33.002614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.730 00:11:39.730 Latency(us) 00:11:39.730 [2024-12-05T19:31:33.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.730 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:39.730 Nvme1n1 : 5.01 11285.07 88.16 0.00 0.00 11328.01 2561.86 26571.87 00:11:39.730 [2024-12-05T19:31:33.167Z] =================================================================================================================== 00:11:39.730 [2024-12-05T19:31:33.167Z] Total : 11285.07 88.16 0.00 0.00 11328.01 2561.86 26571.87 00:11:39.730 [2024-12-05 19:31:33.009783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.730 [2024-12-05 19:31:33.009975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.730 [2024-12-05 19:31:33.017777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.730 [2024-12-05 19:31:33.017957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.730 [2024-12-05 19:31:33.025777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.730 [2024-12-05 19:31:33.025939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.730 [2024-12-05 19:31:33.033789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.730 [2024-12-05 19:31:33.034042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.730 [2024-12-05 19:31:33.041797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.730 [2024-12-05 19:31:33.041991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.730 [2024-12-05 19:31:33.049804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.731 [2024-12-05 19:31:33.050015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.731 [2024-12-05 19:31:33.061803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.731 [2024-12-05 19:31:33.062078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.731 [2024-12-05 19:31:33.069795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.731 [2024-12-05 19:31:33.069996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.731 [2024-12-05 19:31:33.081813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.731 [2024-12-05 19:31:33.082076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.731 [2024-12-05 19:31:33.093825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.731 [2024-12-05 
19:31:33.094132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.731 [2024-12-05 19:31:33.105816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.731 [2024-12-05 19:31:33.106094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.731 [2024-12-05 19:31:33.117822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.731 [2024-12-05 19:31:33.118099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.731 [2024-12-05 19:31:33.125815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.731 [2024-12-05 19:31:33.126003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.731 [2024-12-05 19:31:33.137836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.731 [2024-12-05 19:31:33.138130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.731 [2024-12-05 19:31:33.145813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.731 [2024-12-05 19:31:33.145990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.731 [2024-12-05 19:31:33.157819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.731 [2024-12-05 19:31:33.158028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.731 [2024-12-05 19:31:33.165819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.731 [2024-12-05 19:31:33.165994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.988 [2024-12-05 19:31:33.177838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.989 [2024-12-05 19:31:33.178145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.989 [2024-12-05 19:31:33.189818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.989 [2024-12-05 19:31:33.189851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.989 [2024-12-05 19:31:33.197829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.989 [2024-12-05 19:31:33.197999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.989 [2024-12-05 19:31:33.209813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.989 [2024-12-05 19:31:33.209844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.989 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65445) - No such process 00:11:39.989 19:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65445 00:11:39.989 19:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.989 19:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.989 19:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:39.989 19:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.989 19:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b 
malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:39.989 19:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.989 19:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:39.989 delay0 00:11:39.989 19:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.989 19:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:39.989 19:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.989 19:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:39.989 19:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.989 19:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:11:39.989 [2024-12-05 19:31:33.421300] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:46.568 Initializing NVMe Controllers 00:11:46.568 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:46.568 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:46.568 Initialization complete. Launching workers. 00:11:46.568 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 51 00:11:46.568 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 338, failed to submit 33 00:11:46.568 success 195, unsuccessful 143, failed 0 00:11:46.568 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:46.568 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:46.568 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:46.568 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:46.569 rmmod nvme_tcp 00:11:46.569 rmmod nvme_fabrics 00:11:46.569 rmmod nvme_keyring 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65290 ']' 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65290 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65290 ']' 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65290 00:11:46.569 19:31:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65290 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:46.569 killing process with pid 65290 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65290' 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65290 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65290 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:46.569 19:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:46.828 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:46.828 19:31:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:46.828 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.828 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.828 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.828 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:11:46.828 00:11:46.828 real 0m24.993s 00:11:46.828 user 0m40.327s 00:11:46.828 sys 0m6.921s 00:11:46.828 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.828 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.828 ************************************ 00:11:46.828 END TEST nvmf_zcopy 00:11:46.828 ************************************ 00:11:46.828 19:31:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:46.828 19:31:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.828 19:31:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.828 19:31:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:46.828 ************************************ 00:11:46.828 START TEST nvmf_nmic 00:11:46.828 ************************************ 00:11:46.828 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:46.828 * Looking for test storage... 
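The END/START banners above mark the harness moving from nvmf_zcopy to nvmf_nmic; the run_test call shown in the trace appears to be what emits the banners and the real/user/sys timing printed just before them. To run the next script on its own, outside the harness, something like the following should suffice (a sketch: it assumes a built SPDK tree at the path shown in the log and sufficient privileges for the virtual-network setup and teardown the test performs with ip link/ip netns):

    cd /home/vagrant/spdk_repo/spdk
    ./test/nvmf/target/nmic.sh --transport=tcp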
00:11:46.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:46.828 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:46.828 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:11:46.828 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:47.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.089 --rc genhtml_branch_coverage=1 00:11:47.089 --rc genhtml_function_coverage=1 00:11:47.089 --rc genhtml_legend=1 00:11:47.089 --rc geninfo_all_blocks=1 00:11:47.089 --rc geninfo_unexecuted_blocks=1 00:11:47.089 00:11:47.089 ' 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:47.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.089 --rc genhtml_branch_coverage=1 00:11:47.089 --rc genhtml_function_coverage=1 00:11:47.089 --rc genhtml_legend=1 00:11:47.089 --rc geninfo_all_blocks=1 00:11:47.089 --rc geninfo_unexecuted_blocks=1 00:11:47.089 00:11:47.089 ' 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:47.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.089 --rc genhtml_branch_coverage=1 00:11:47.089 --rc genhtml_function_coverage=1 00:11:47.089 --rc genhtml_legend=1 00:11:47.089 --rc geninfo_all_blocks=1 00:11:47.089 --rc geninfo_unexecuted_blocks=1 00:11:47.089 00:11:47.089 ' 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:47.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.089 --rc genhtml_branch_coverage=1 00:11:47.089 --rc genhtml_function_coverage=1 00:11:47.089 --rc genhtml_legend=1 00:11:47.089 --rc geninfo_all_blocks=1 00:11:47.089 --rc geninfo_unexecuted_blocks=1 00:11:47.089 00:11:47.089 ' 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.089 19:31:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.089 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.090 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:47.090 19:31:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:47.090 Cannot 
find device "nvmf_init_br" 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:47.090 Cannot find device "nvmf_init_br2" 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:47.090 Cannot find device "nvmf_tgt_br" 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:47.090 Cannot find device "nvmf_tgt_br2" 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:47.090 Cannot find device "nvmf_init_br" 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:47.090 Cannot find device "nvmf_init_br2" 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:47.090 Cannot find device "nvmf_tgt_br" 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:47.090 Cannot find device "nvmf_tgt_br2" 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:47.090 Cannot find device "nvmf_br" 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:47.090 Cannot find device "nvmf_init_if" 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:47.090 Cannot find device "nvmf_init_if2" 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:47.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:47.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:47.090 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
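The nvmf_veth_init trace running here builds the virtual test network for this run: a dedicated network namespace for the SPDK target, four veth pairs, and a bridge joining the initiator-side and target-side peers (the "Cannot find device" and "Cannot open network namespace" messages just above are the expected no-op cleanup of a previous run's interfaces). A condensed, hypothetical standalone sketch of that topology, reusing the namespace, interface names, and 10.0.0.0/24 addresses recorded in this log; the remaining link-up, bridge, and iptables commands continue in the trace below:

    ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator-side veth pairs
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target-side veth pairs
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move the target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                                 # bridge joining the *_br peers
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done
    # the per-interface 'ip link set ... up' calls and the iptables ACCEPT rules
    # for TCP port 4420 follow in the trace below, then the four pings verify
    # initiator <-> target reachability in both directions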
00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:47.350 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:47.350 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:11:47.350 00:11:47.350 --- 10.0.0.3 ping statistics --- 00:11:47.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.350 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:47.350 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:47.350 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:11:47.350 00:11:47.350 --- 10.0.0.4 ping statistics --- 00:11:47.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.350 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:47.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:47.350 00:11:47.350 --- 10.0.0.1 ping statistics --- 00:11:47.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.350 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:47.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:47.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:11:47.350 00:11:47.350 --- 10.0.0.2 ping statistics --- 00:11:47.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.350 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:47.350 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:47.351 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.351 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:47.351 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:47.351 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:47.351 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:47.351 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:47.351 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.351 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65827 00:11:47.351 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.351 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65827 00:11:47.351 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65827 ']' 00:11:47.351 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.351 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.351 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.351 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.351 19:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.610 [2024-12-05 19:31:40.811680] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
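With the veth network verified by the pings above, nvmfappstart launches the target inside the namespace and waits for its JSON-RPC socket; the startup banner beginning above and the EAL/reactor notices that follow are that nvmf_tgt instance coming up on cores 0-3. A minimal sketch of what the launch amounts to, with waitforlisten (the harness's poll on /var/tmp/spdk.sock) approximated here by a simple rpc.py loop:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: 4 cores, -e 0xFFFF: tracepoint mask, -i 0: shm id
    nvmfpid=$!
    # poll until the app answers JSON-RPC on its default socket, /var/tmp/spdk.sock
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done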
00:11:47.610 [2024-12-05 19:31:40.811810] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.610 [2024-12-05 19:31:40.964137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.610 [2024-12-05 19:31:41.034016] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.610 [2024-12-05 19:31:41.034098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.610 [2024-12-05 19:31:41.034137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.610 [2024-12-05 19:31:41.034148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.610 [2024-12-05 19:31:41.034158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.610 [2024-12-05 19:31:41.035488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.610 [2024-12-05 19:31:41.035592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.610 [2024-12-05 19:31:41.035715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.610 [2024-12-05 19:31:41.035721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.869 [2024-12-05 19:31:41.093798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 [2024-12-05 19:31:41.213520] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 Malloc0 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:47.869 19:31:41 
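The rpc_cmd calls traced here (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem) and continuing below (add_ns, add_listener, plus the deliberately failing second subsystem) provision the target, and the host then attaches over two listeners with nvme connect. Condensed into direct scripts/rpc.py and nvme-cli calls, using the same NQNs, serial, and 10.0.0.3 listeners this log records (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    rpc.py nvmf_create_transport -t tcp -o -u 8192                   # transport options copied from the trace
    rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # test case1: adding the same Malloc0 to a second subsystem must fail, because the
    # bdev is already claimed exclusive_write by cnode1 -- that is the "Invalid parameters"
    # JSON-RPC error recorded below
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo "expected failure"
    # test case2: the host connects to cnode1 over two listeners (two paths)
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 \
        --hostid=bf08626d-6d1d-4cb1-9877-bff179cd6dc1
    nvme connect -t tcp -a 10.0.0.3 -s 4421 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 \
        --hostid=bf08626d-6d1d-4cb1-9877-bff179cd6dc1

The lsblk/grep loop traced further down simply waits until a block device with serial SPDKISFASTANDAWESOME appears before the fio write pass starts.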
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 [2024-12-05 19:31:41.282169] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:47.869 test case1: single bdev can't be used in multiple subsystems 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.870 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:47.870 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:47.870 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.870 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:48.129 [2024-12-05 19:31:41.305944] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:48.129 [2024-12-05 19:31:41.305981] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:48.129 [2024-12-05 19:31:41.305992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.129 request: 00:11:48.129 { 00:11:48.129 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:48.129 "namespace": { 00:11:48.129 "bdev_name": "Malloc0", 00:11:48.129 "no_auto_visible": false, 00:11:48.129 "hide_metadata": false 00:11:48.129 }, 00:11:48.129 "method": "nvmf_subsystem_add_ns", 00:11:48.129 "req_id": 1 00:11:48.129 } 00:11:48.129 Got JSON-RPC error response 00:11:48.129 response: 00:11:48.129 { 00:11:48.129 "code": -32602, 00:11:48.129 "message": "Invalid parameters" 00:11:48.129 } 00:11:48.129 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:48.129 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:48.129 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:48.129 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:48.129 Adding namespace failed - expected result. 00:11:48.130 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:48.130 test case2: host connect to nvmf target in multiple paths 00:11:48.130 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:11:48.130 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.130 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:48.130 [2024-12-05 19:31:41.322165] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:11:48.130 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.130 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:48.130 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:11:48.389 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.389 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:48.389 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.389 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:48.389 19:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:11:50.360 19:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:50.360 19:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:50.360 19:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.360 19:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:50.360 19:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:11:50.360 19:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:50.360 19:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:50.360 [global] 00:11:50.360 thread=1 00:11:50.360 invalidate=1 00:11:50.360 rw=write 00:11:50.360 time_based=1 00:11:50.360 runtime=1 00:11:50.360 ioengine=libaio 00:11:50.360 direct=1 00:11:50.360 bs=4096 00:11:50.360 iodepth=1 00:11:50.360 norandommap=0 00:11:50.360 numjobs=1 00:11:50.360 00:11:50.360 verify_dump=1 00:11:50.360 verify_backlog=512 00:11:50.360 verify_state_save=0 00:11:50.360 do_verify=1 00:11:50.360 verify=crc32c-intel 00:11:50.360 [job0] 00:11:50.360 filename=/dev/nvme0n1 00:11:50.360 Could not set queue depth (nvme0n1) 00:11:50.634 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.634 fio-3.35 00:11:50.634 Starting 1 thread 00:11:51.572 00:11:51.572 job0: (groupid=0, jobs=1): err= 0: pid=65910: Thu Dec 5 19:31:44 2024 00:11:51.572 read: IOPS=2932, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1001msec) 00:11:51.573 slat (nsec): min=11686, max=60999, avg=14153.75, stdev=3646.63 00:11:51.573 clat (usec): min=134, max=356, avg=183.40, stdev=17.58 00:11:51.573 lat (usec): min=152, max=376, avg=197.55, stdev=17.86 00:11:51.573 clat percentiles (usec): 00:11:51.573 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 169], 00:11:51.573 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:11:51.573 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 212], 00:11:51.573 | 99.00th=[ 231], 99.50th=[ 241], 99.90th=[ 269], 99.95th=[ 281], 00:11:51.573 | 99.99th=[ 355] 00:11:51.573 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:51.573 slat (nsec): min=16471, max=93507, avg=20515.97, stdev=6250.92 00:11:51.573 clat (usec): min=86, max=206, avg=113.31, stdev=14.48 00:11:51.573 lat (usec): min=104, max=300, avg=133.83, stdev=16.39 00:11:51.573 clat percentiles (usec): 00:11:51.573 | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 98], 20.00th=[ 102], 00:11:51.573 | 30.00th=[ 105], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 114], 00:11:51.573 | 70.00th=[ 118], 80.00th=[ 125], 90.00th=[ 135], 95.00th=[ 141], 00:11:51.573 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 180], 99.95th=[ 202], 00:11:51.573 | 99.99th=[ 206] 00:11:51.573 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:11:51.573 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:51.573 lat (usec) : 100=7.82%, 250=92.09%, 500=0.08% 00:11:51.573 cpu : usr=2.10%, sys=8.40%, ctx=6007, majf=0, minf=5 00:11:51.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:51.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.573 issued rwts: total=2935,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:51.573 00:11:51.573 Run status group 0 (all jobs): 00:11:51.573 READ: bw=11.5MiB/s (12.0MB/s), 11.5MiB/s-11.5MiB/s (12.0MB/s-12.0MB/s), io=11.5MiB (12.0MB), run=1001-1001msec 00:11:51.573 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:11:51.573 00:11:51.573 Disk stats (read/write): 00:11:51.573 nvme0n1: ios=2610/2867, merge=0/0, 
ticks=485/349, in_queue=834, util=91.28% 00:11:51.573 19:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:51.573 19:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:51.573 19:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:51.573 19:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:51.573 19:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.573 19:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.573 19:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:51.834 rmmod nvme_tcp 00:11:51.834 rmmod nvme_fabrics 00:11:51.834 rmmod nvme_keyring 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65827 ']' 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65827 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65827 ']' 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65827 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65827 00:11:51.834 killing process with pid 65827 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.834 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.835 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65827' 00:11:51.835 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@973 -- # kill 65827 00:11:51.835 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65827 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:52.092 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:52.352 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:52.352 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:52.352 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:52.352 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:52.352 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:52.352 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.352 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.352 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.352 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:11:52.352 00:11:52.352 real 0m5.556s 00:11:52.352 user 0m16.135s 00:11:52.352 sys 0m2.384s 00:11:52.352 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.352 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:52.352 
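nvmftestfini, traced above, undoes the whole setup before the END banner below: host disconnect, module unload, target shutdown, iptables cleanup, and removal of the veth/bridge topology. A condensed sketch of those steps as recorded in this log; the final namespace removal happens inside _remove_spdk_ns, whose output the trace silences, so that last line is an assumption:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # log: "disconnected 2 controller(s)" -- both paths
    modprobe -r nvme-tcp nvme-fabrics                        # unload the host-side fabrics modules
    kill "$nvmfpid" && wait "$nvmfpid"                       # stop the nvmf_tgt started earlier (pid 65827 in this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the rules the test tagged with SPDK_NVMF
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                         # assumed equivalent of the suppressed _remove_spdk_ns step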
************************************ 00:11:52.352 END TEST nvmf_nmic 00:11:52.352 ************************************ 00:11:52.352 19:31:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:52.352 19:31:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:52.352 19:31:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.352 19:31:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:52.352 ************************************ 00:11:52.352 START TEST nvmf_fio_target 00:11:52.352 ************************************ 00:11:52.352 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:52.612 * Looking for test storage... 00:11:52.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:52.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.612 --rc genhtml_branch_coverage=1 00:11:52.612 --rc genhtml_function_coverage=1 00:11:52.612 --rc genhtml_legend=1 00:11:52.612 --rc geninfo_all_blocks=1 00:11:52.612 --rc geninfo_unexecuted_blocks=1 00:11:52.612 00:11:52.612 ' 00:11:52.612 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:52.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.612 --rc genhtml_branch_coverage=1 00:11:52.612 --rc genhtml_function_coverage=1 00:11:52.612 --rc genhtml_legend=1 00:11:52.612 --rc geninfo_all_blocks=1 00:11:52.612 --rc geninfo_unexecuted_blocks=1 00:11:52.612 00:11:52.612 ' 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:52.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.613 --rc genhtml_branch_coverage=1 00:11:52.613 --rc genhtml_function_coverage=1 00:11:52.613 --rc genhtml_legend=1 00:11:52.613 --rc geninfo_all_blocks=1 00:11:52.613 --rc geninfo_unexecuted_blocks=1 00:11:52.613 00:11:52.613 ' 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:52.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.613 --rc genhtml_branch_coverage=1 00:11:52.613 --rc genhtml_function_coverage=1 00:11:52.613 --rc genhtml_legend=1 00:11:52.613 --rc geninfo_all_blocks=1 00:11:52.613 --rc geninfo_unexecuted_blocks=1 00:11:52.613 00:11:52.613 ' 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:52.613 
19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:52.613 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:52.613 19:31:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:52.613 Cannot find device "nvmf_init_br" 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:52.613 Cannot find device "nvmf_init_br2" 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:52.613 19:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:52.613 Cannot find device "nvmf_tgt_br" 00:11:52.613 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:11:52.613 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:52.613 Cannot find device "nvmf_tgt_br2" 00:11:52.613 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:11:52.613 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:52.613 Cannot find device "nvmf_init_br" 00:11:52.614 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:11:52.614 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:52.614 Cannot find device "nvmf_init_br2" 00:11:52.614 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:11:52.614 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:52.873 Cannot find device "nvmf_tgt_br" 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:52.873 Cannot find device "nvmf_tgt_br2" 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:52.873 Cannot find device "nvmf_br" 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:52.873 Cannot find device "nvmf_init_if" 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:52.873 Cannot find device "nvmf_init_if2" 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:52.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:11:52.873 
19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:52.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:52.873 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:53.133 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:53.133 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:53.133 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:53.133 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:53.133 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:53.133 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:53.133 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:53.133 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:53.133 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:53.133 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:53.133 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:53.133 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:11:53.133 00:11:53.133 --- 10.0.0.3 ping statistics --- 00:11:53.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.133 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:11:53.133 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:53.133 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:53.133 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms 00:11:53.133 00:11:53.133 --- 10.0.0.4 ping statistics --- 00:11:53.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.133 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:11:53.133 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:53.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:53.133 00:11:53.133 --- 10.0.0.1 ping statistics --- 00:11:53.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.134 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:53.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:53.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:11:53.134 00:11:53.134 --- 10.0.0.2 ping statistics --- 00:11:53.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.134 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66138 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66138 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66138 ']' 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.134 19:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.134 [2024-12-05 19:31:46.473340] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:11:53.134 [2024-12-05 19:31:46.473468] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.394 [2024-12-05 19:31:46.622251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.394 [2024-12-05 19:31:46.682268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.394 [2024-12-05 19:31:46.682328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.394 [2024-12-05 19:31:46.682356] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.394 [2024-12-05 19:31:46.682364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.394 [2024-12-05 19:31:46.682371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.394 [2024-12-05 19:31:46.683538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.394 [2024-12-05 19:31:46.683591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.394 [2024-12-05 19:31:46.683820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.394 [2024-12-05 19:31:46.683823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.394 [2024-12-05 19:31:46.740031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:54.332 19:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.332 19:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:54.332 19:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:54.332 19:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:54.332 19:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.332 19:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.332 19:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:54.591 [2024-12-05 19:31:47.838447] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.591 19:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:54.850 19:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:54.850 19:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:55.109 19:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:55.109 19:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:55.368 19:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:55.368 19:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:55.627 19:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:55.627 19:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:55.886 19:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.475 19:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:56.475 19:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.733 19:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:56.733 19:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.991 19:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:56.991 19:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:57.249 19:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:57.507 19:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:57.507 19:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:57.765 19:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:57.765 19:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:58.023 19:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:58.340 [2024-12-05 19:31:51.593649] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:58.340 19:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:58.598 19:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:58.856 19:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:58.856 19:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:58.856 19:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:58.856 19:31:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.856 19:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:58.856 19:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:58.856 19:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:01.382 19:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:01.382 19:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:01.382 19:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.382 19:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:01.382 19:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.382 19:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:01.382 19:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:01.382 [global] 00:12:01.382 thread=1 00:12:01.382 invalidate=1 00:12:01.382 rw=write 00:12:01.382 time_based=1 00:12:01.382 runtime=1 00:12:01.382 ioengine=libaio 00:12:01.382 direct=1 00:12:01.382 bs=4096 00:12:01.382 iodepth=1 00:12:01.382 norandommap=0 00:12:01.382 numjobs=1 00:12:01.382 00:12:01.382 verify_dump=1 00:12:01.382 verify_backlog=512 00:12:01.382 verify_state_save=0 00:12:01.382 do_verify=1 00:12:01.382 verify=crc32c-intel 00:12:01.382 [job0] 00:12:01.382 filename=/dev/nvme0n1 00:12:01.382 [job1] 00:12:01.382 filename=/dev/nvme0n2 00:12:01.382 [job2] 00:12:01.382 filename=/dev/nvme0n3 00:12:01.382 [job3] 00:12:01.382 filename=/dev/nvme0n4 00:12:01.382 Could not set queue depth (nvme0n1) 00:12:01.382 Could not set queue depth (nvme0n2) 00:12:01.382 Could not set queue depth (nvme0n3) 00:12:01.382 Could not set queue depth (nvme0n4) 00:12:01.382 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:01.382 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:01.382 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:01.382 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:01.382 fio-3.35 00:12:01.382 Starting 4 threads 00:12:02.318 00:12:02.318 job0: (groupid=0, jobs=1): err= 0: pid=66328: Thu Dec 5 19:31:55 2024 00:12:02.318 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:02.318 slat (nsec): min=10173, max=33438, avg=13477.84, stdev=2452.93 00:12:02.318 clat (usec): min=239, max=608, avg=361.30, stdev=46.67 00:12:02.318 lat (usec): min=251, max=619, avg=374.77, stdev=46.89 00:12:02.318 clat percentiles (usec): 00:12:02.318 | 1.00th=[ 273], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 330], 00:12:02.318 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 351], 00:12:02.318 | 70.00th=[ 363], 80.00th=[ 400], 90.00th=[ 433], 95.00th=[ 445], 00:12:02.318 | 99.00th=[ 545], 99.50th=[ 586], 99.90th=[ 611], 99.95th=[ 611], 00:12:02.318 | 99.99th=[ 611] 
00:12:02.318 write: IOPS=1643, BW=6573KiB/s (6731kB/s)(6580KiB/1001msec); 0 zone resets 00:12:02.318 slat (nsec): min=11250, max=97459, avg=21288.48, stdev=5656.10 00:12:02.318 clat (usec): min=149, max=2567, avg=233.50, stdev=86.33 00:12:02.318 lat (usec): min=168, max=2584, avg=254.79, stdev=86.67 00:12:02.318 clat percentiles (usec): 00:12:02.318 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 190], 00:12:02.318 | 30.00th=[ 202], 40.00th=[ 215], 50.00th=[ 237], 60.00th=[ 249], 00:12:02.318 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:12:02.318 | 99.00th=[ 310], 99.50th=[ 375], 99.90th=[ 1926], 99.95th=[ 2573], 00:12:02.318 | 99.99th=[ 2573] 00:12:02.318 bw ( KiB/s): min= 8192, max= 8192, per=25.95%, avg=8192.00, stdev= 0.00, samples=1 00:12:02.318 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:02.318 lat (usec) : 250=31.81%, 500=67.37%, 750=0.69%, 1000=0.03% 00:12:02.318 lat (msec) : 2=0.06%, 4=0.03% 00:12:02.318 cpu : usr=1.10%, sys=4.80%, ctx=3181, majf=0, minf=7 00:12:02.318 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:02.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.318 issued rwts: total=1536,1645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:02.318 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:02.318 job1: (groupid=0, jobs=1): err= 0: pid=66329: Thu Dec 5 19:31:55 2024 00:12:02.318 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:02.318 slat (nsec): min=12054, max=46108, avg=16982.93, stdev=3764.31 00:12:02.318 clat (usec): min=235, max=593, avg=357.66, stdev=46.74 00:12:02.318 lat (usec): min=252, max=611, avg=374.64, stdev=46.71 00:12:02.318 clat percentiles (usec): 00:12:02.318 | 1.00th=[ 269], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 326], 00:12:02.318 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 347], 00:12:02.318 | 70.00th=[ 359], 80.00th=[ 396], 90.00th=[ 429], 95.00th=[ 441], 00:12:02.318 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 594], 99.95th=[ 594], 00:12:02.318 | 99.99th=[ 594] 00:12:02.318 write: IOPS=1645, BW=6581KiB/s (6739kB/s)(6588KiB/1001msec); 0 zone resets 00:12:02.318 slat (nsec): min=13251, max=70869, avg=22396.52, stdev=5396.52 00:12:02.318 clat (usec): min=105, max=2658, avg=231.86, stdev=87.63 00:12:02.318 lat (usec): min=132, max=2680, avg=254.26, stdev=87.84 00:12:02.318 clat percentiles (usec): 00:12:02.318 | 1.00th=[ 155], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 188], 00:12:02.318 | 30.00th=[ 200], 40.00th=[ 215], 50.00th=[ 235], 60.00th=[ 245], 00:12:02.318 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:12:02.318 | 99.00th=[ 314], 99.50th=[ 400], 99.90th=[ 2008], 99.95th=[ 2671], 00:12:02.318 | 99.99th=[ 2671] 00:12:02.318 bw ( KiB/s): min= 8192, max= 8192, per=25.95%, avg=8192.00, stdev= 0.00, samples=1 00:12:02.318 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:02.318 lat (usec) : 250=33.30%, 500=65.91%, 750=0.69% 00:12:02.318 lat (msec) : 2=0.03%, 4=0.06% 00:12:02.318 cpu : usr=1.50%, sys=5.20%, ctx=3183, majf=0, minf=5 00:12:02.318 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:02.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.318 issued rwts: total=1536,1647,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:12:02.318 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:02.318 job2: (groupid=0, jobs=1): err= 0: pid=66330: Thu Dec 5 19:31:55 2024 00:12:02.318 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:12:02.319 slat (nsec): min=10610, max=27922, avg=12345.65, stdev=1840.20 00:12:02.319 clat (usec): min=142, max=533, avg=167.86, stdev=13.78 00:12:02.319 lat (usec): min=155, max=544, avg=180.21, stdev=14.18 00:12:02.319 clat percentiles (usec): 00:12:02.319 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:12:02.319 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:12:02.319 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 192], 00:12:02.319 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 229], 99.95th=[ 233], 00:12:02.319 | 99.99th=[ 537] 00:12:02.319 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:02.319 slat (nsec): min=13385, max=83990, avg=18912.91, stdev=3851.46 00:12:02.319 clat (usec): min=101, max=502, avg=123.80, stdev=14.59 00:12:02.319 lat (usec): min=118, max=522, avg=142.72, stdev=15.87 00:12:02.319 clat percentiles (usec): 00:12:02.319 | 1.00th=[ 104], 5.00th=[ 108], 10.00th=[ 109], 20.00th=[ 113], 00:12:02.319 | 30.00th=[ 116], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 126], 00:12:02.319 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147], 00:12:02.319 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 184], 99.95th=[ 210], 00:12:02.319 | 99.99th=[ 502] 00:12:02.319 bw ( KiB/s): min=12288, max=12288, per=38.92%, avg=12288.00, stdev= 0.00, samples=1 00:12:02.319 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:02.319 lat (usec) : 250=99.97%, 750=0.03% 00:12:02.319 cpu : usr=2.20%, sys=7.70%, ctx=6144, majf=0, minf=15 00:12:02.319 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:02.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.319 issued rwts: total=3069,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:02.319 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:02.319 job3: (groupid=0, jobs=1): err= 0: pid=66332: Thu Dec 5 19:31:55 2024 00:12:02.319 read: IOPS=1162, BW=4651KiB/s (4763kB/s)(4656KiB/1001msec) 00:12:02.319 slat (nsec): min=13820, max=87108, avg=27245.32, stdev=12583.60 00:12:02.319 clat (usec): min=160, max=7968, avg=409.10, stdev=252.90 00:12:02.319 lat (usec): min=175, max=7987, avg=436.34, stdev=256.39 00:12:02.319 clat percentiles (usec): 00:12:02.319 | 1.00th=[ 223], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 330], 00:12:02.319 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 359], 00:12:02.319 | 70.00th=[ 453], 80.00th=[ 490], 90.00th=[ 594], 95.00th=[ 627], 00:12:02.319 | 99.00th=[ 676], 99.50th=[ 685], 99.90th=[ 2376], 99.95th=[ 7963], 00:12:02.319 | 99.99th=[ 7963] 00:12:02.319 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:02.319 slat (usec): min=19, max=103, avg=33.18, stdev=11.04 00:12:02.319 clat (usec): min=114, max=671, avg=282.02, stdev=97.10 00:12:02.319 lat (usec): min=139, max=714, avg=315.20, stdev=105.31 00:12:02.319 clat percentiles (usec): 00:12:02.319 | 1.00th=[ 128], 5.00th=[ 151], 10.00th=[ 190], 20.00th=[ 198], 00:12:02.319 | 30.00th=[ 206], 40.00th=[ 227], 50.00th=[ 265], 60.00th=[ 285], 00:12:02.319 | 70.00th=[ 322], 80.00th=[ 375], 90.00th=[ 441], 95.00th=[ 461], 00:12:02.319 | 
99.00th=[ 486], 99.50th=[ 494], 99.90th=[ 570], 99.95th=[ 668], 00:12:02.319 | 99.99th=[ 668] 00:12:02.319 bw ( KiB/s): min= 6224, max= 6224, per=19.72%, avg=6224.00, stdev= 0.00, samples=1 00:12:02.319 iops : min= 1556, max= 1556, avg=1556.00, stdev= 0.00, samples=1 00:12:02.319 lat (usec) : 250=25.59%, 500=66.44%, 750=7.81%, 1000=0.04% 00:12:02.319 lat (msec) : 2=0.04%, 4=0.04%, 10=0.04% 00:12:02.319 cpu : usr=1.40%, sys=7.00%, ctx=2701, majf=0, minf=11 00:12:02.319 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:02.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.319 issued rwts: total=1164,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:02.319 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:02.319 00:12:02.319 Run status group 0 (all jobs): 00:12:02.319 READ: bw=28.5MiB/s (29.9MB/s), 4651KiB/s-12.0MiB/s (4763kB/s-12.6MB/s), io=28.5MiB (29.9MB), run=1001-1001msec 00:12:02.319 WRITE: bw=30.8MiB/s (32.3MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=30.9MiB (32.4MB), run=1001-1001msec 00:12:02.319 00:12:02.319 Disk stats (read/write): 00:12:02.319 nvme0n1: ios=1315/1536, merge=0/0, ticks=436/328, in_queue=764, util=87.37% 00:12:02.319 nvme0n2: ios=1304/1536, merge=0/0, ticks=481/347, in_queue=828, util=88.45% 00:12:02.319 nvme0n3: ios=2560/2720, merge=0/0, ticks=442/353, in_queue=795, util=89.23% 00:12:02.319 nvme0n4: ios=1024/1198, merge=0/0, ticks=429/385, in_queue=814, util=89.79% 00:12:02.319 19:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:02.319 [global] 00:12:02.319 thread=1 00:12:02.319 invalidate=1 00:12:02.319 rw=randwrite 00:12:02.319 time_based=1 00:12:02.319 runtime=1 00:12:02.319 ioengine=libaio 00:12:02.319 direct=1 00:12:02.319 bs=4096 00:12:02.319 iodepth=1 00:12:02.319 norandommap=0 00:12:02.319 numjobs=1 00:12:02.319 00:12:02.319 verify_dump=1 00:12:02.319 verify_backlog=512 00:12:02.319 verify_state_save=0 00:12:02.319 do_verify=1 00:12:02.319 verify=crc32c-intel 00:12:02.319 [job0] 00:12:02.319 filename=/dev/nvme0n1 00:12:02.319 [job1] 00:12:02.319 filename=/dev/nvme0n2 00:12:02.319 [job2] 00:12:02.319 filename=/dev/nvme0n3 00:12:02.319 [job3] 00:12:02.319 filename=/dev/nvme0n4 00:12:02.577 Could not set queue depth (nvme0n1) 00:12:02.577 Could not set queue depth (nvme0n2) 00:12:02.577 Could not set queue depth (nvme0n3) 00:12:02.577 Could not set queue depth (nvme0n4) 00:12:02.577 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.577 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.577 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.577 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.577 fio-3.35 00:12:02.577 Starting 4 threads 00:12:03.952 00:12:03.952 job0: (groupid=0, jobs=1): err= 0: pid=66395: Thu Dec 5 19:31:57 2024 00:12:03.952 read: IOPS=2791, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:12:03.952 slat (nsec): min=8086, max=33480, avg=12403.88, stdev=2100.65 00:12:03.952 clat (usec): min=135, max=2224, avg=180.57, stdev=62.48 00:12:03.952 lat (usec): min=146, max=2235, avg=192.97, stdev=62.51 
00:12:03.952 clat percentiles (usec): 00:12:03.952 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:12:03.952 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:12:03.952 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 249], 95.00th=[ 318], 00:12:03.952 | 99.00th=[ 343], 99.50th=[ 347], 99.90th=[ 537], 99.95th=[ 807], 00:12:03.952 | 99.99th=[ 2212] 00:12:03.952 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:03.952 slat (usec): min=12, max=398, avg=19.06, stdev= 8.05 00:12:03.952 clat (usec): min=95, max=2352, avg=128.11, stdev=47.75 00:12:03.952 lat (usec): min=111, max=2372, avg=147.16, stdev=48.23 00:12:03.952 clat percentiles (usec): 00:12:03.952 | 1.00th=[ 99], 5.00th=[ 104], 10.00th=[ 109], 20.00th=[ 113], 00:12:03.952 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 127], 00:12:03.952 | 70.00th=[ 131], 80.00th=[ 137], 90.00th=[ 149], 95.00th=[ 167], 00:12:03.952 | 99.00th=[ 208], 99.50th=[ 227], 99.90th=[ 457], 99.95th=[ 807], 00:12:03.952 | 99.99th=[ 2343] 00:12:03.952 bw ( KiB/s): min=12288, max=12288, per=30.30%, avg=12288.00, stdev= 0.00, samples=1 00:12:03.952 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:03.952 lat (usec) : 100=0.89%, 250=94.44%, 500=4.57%, 750=0.03%, 1000=0.03% 00:12:03.952 lat (msec) : 4=0.03% 00:12:03.952 cpu : usr=1.90%, sys=7.70%, ctx=5867, majf=0, minf=9 00:12:03.952 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:03.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.952 issued rwts: total=2794,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.952 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:03.952 job1: (groupid=0, jobs=1): err= 0: pid=66396: Thu Dec 5 19:31:57 2024 00:12:03.952 read: IOPS=1746, BW=6985KiB/s (7153kB/s)(6992KiB/1001msec) 00:12:03.952 slat (nsec): min=11540, max=47220, avg=13780.61, stdev=2456.94 00:12:03.952 clat (usec): min=151, max=1631, avg=275.91, stdev=46.44 00:12:03.952 lat (usec): min=170, max=1643, avg=289.69, stdev=46.38 00:12:03.952 clat percentiles (usec): 00:12:03.952 | 1.00th=[ 231], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 260], 00:12:03.952 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:12:03.952 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 314], 00:12:03.952 | 99.00th=[ 379], 99.50th=[ 404], 99.90th=[ 816], 99.95th=[ 1631], 00:12:03.952 | 99.99th=[ 1631] 00:12:03.952 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:03.952 slat (usec): min=16, max=470, avg=25.31, stdev=14.08 00:12:03.952 clat (usec): min=103, max=455, avg=212.32, stdev=42.37 00:12:03.952 lat (usec): min=125, max=649, avg=237.62, stdev=49.98 00:12:03.952 clat percentiles (usec): 00:12:03.952 | 1.00th=[ 121], 5.00th=[ 172], 10.00th=[ 184], 20.00th=[ 190], 00:12:03.952 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:12:03.952 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 289], 95.00th=[ 306], 00:12:03.952 | 99.00th=[ 351], 99.50th=[ 367], 99.90th=[ 383], 99.95th=[ 420], 00:12:03.952 | 99.99th=[ 457] 00:12:03.952 bw ( KiB/s): min= 8192, max= 8192, per=20.20%, avg=8192.00, stdev= 0.00, samples=1 00:12:03.952 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:03.952 lat (usec) : 250=50.79%, 500=49.05%, 750=0.08%, 1000=0.05% 00:12:03.952 lat (msec) : 2=0.03% 00:12:03.952 cpu : usr=1.40%, 
sys=6.10%, ctx=3797, majf=0, minf=21 00:12:03.952 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:03.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.952 issued rwts: total=1748,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.952 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:03.952 job2: (groupid=0, jobs=1): err= 0: pid=66397: Thu Dec 5 19:31:57 2024 00:12:03.952 read: IOPS=1974, BW=7896KiB/s (8086kB/s)(7904KiB/1001msec) 00:12:03.952 slat (usec): min=11, max=114, avg=14.20, stdev= 4.18 00:12:03.952 clat (usec): min=148, max=662, avg=273.14, stdev=34.20 00:12:03.952 lat (usec): min=160, max=674, avg=287.34, stdev=34.32 00:12:03.952 clat percentiles (usec): 00:12:03.952 | 1.00th=[ 223], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 253], 00:12:03.952 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:12:03.952 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 326], 00:12:03.952 | 99.00th=[ 424], 99.50th=[ 474], 99.90th=[ 652], 99.95th=[ 660], 00:12:03.952 | 99.99th=[ 660] 00:12:03.952 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:03.952 slat (nsec): min=16047, max=97143, avg=20404.48, stdev=4510.85 00:12:03.952 clat (usec): min=101, max=262, avg=187.39, stdev=30.56 00:12:03.952 lat (usec): min=118, max=321, avg=207.80, stdev=31.42 00:12:03.952 clat percentiles (usec): 00:12:03.952 | 1.00th=[ 109], 5.00th=[ 117], 10.00th=[ 128], 20.00th=[ 180], 00:12:03.952 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 200], 00:12:03.952 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 215], 95.00th=[ 221], 00:12:03.952 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 255], 99.95th=[ 260], 00:12:03.952 | 99.99th=[ 265] 00:12:03.952 bw ( KiB/s): min= 8192, max= 8192, per=20.20%, avg=8192.00, stdev= 0.00, samples=1 00:12:03.952 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:03.952 lat (usec) : 250=58.52%, 500=41.38%, 750=0.10% 00:12:03.952 cpu : usr=2.00%, sys=5.00%, ctx=4024, majf=0, minf=11 00:12:03.952 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:03.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.953 issued rwts: total=1976,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.953 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:03.953 job3: (groupid=0, jobs=1): err= 0: pid=66398: Thu Dec 5 19:31:57 2024 00:12:03.953 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:03.953 slat (nsec): min=10229, max=32562, avg=12058.87, stdev=1757.54 00:12:03.953 clat (usec): min=146, max=4184, avg=191.61, stdev=111.43 00:12:03.953 lat (usec): min=157, max=4202, avg=203.66, stdev=111.80 00:12:03.953 clat percentiles (usec): 00:12:03.953 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:12:03.953 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:12:03.953 | 70.00th=[ 180], 80.00th=[ 190], 90.00th=[ 273], 95.00th=[ 314], 00:12:03.953 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 1029], 99.95th=[ 3228], 00:12:03.953 | 99.99th=[ 4178] 00:12:03.953 write: IOPS=2977, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec); 0 zone resets 00:12:03.953 slat (nsec): min=12925, max=77798, avg=17945.18, stdev=4075.30 00:12:03.953 clat (usec): min=103, max=6173, avg=139.97, 
stdev=138.19 00:12:03.953 lat (usec): min=119, max=6189, avg=157.92, stdev=138.47 00:12:03.953 clat percentiles (usec): 00:12:03.953 | 1.00th=[ 111], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 121], 00:12:03.953 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 131], 60.00th=[ 135], 00:12:03.953 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 167], 00:12:03.953 | 99.00th=[ 227], 99.50th=[ 277], 99.90th=[ 2212], 99.95th=[ 3326], 00:12:03.953 | 99.99th=[ 6194] 00:12:03.953 bw ( KiB/s): min=12288, max=12288, per=30.30%, avg=12288.00, stdev= 0.00, samples=1 00:12:03.953 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:03.953 lat (usec) : 250=94.21%, 500=5.52%, 750=0.07%, 1000=0.04% 00:12:03.953 lat (msec) : 2=0.07%, 4=0.05%, 10=0.04% 00:12:03.953 cpu : usr=1.80%, sys=7.00%, ctx=5544, majf=0, minf=5 00:12:03.953 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:03.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.953 issued rwts: total=2560,2980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.953 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:03.953 00:12:03.953 Run status group 0 (all jobs): 00:12:03.953 READ: bw=35.4MiB/s (37.1MB/s), 6985KiB/s-10.9MiB/s (7153kB/s-11.4MB/s), io=35.5MiB (37.2MB), run=1001-1001msec 00:12:03.953 WRITE: bw=39.6MiB/s (41.5MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.6MiB (41.6MB), run=1001-1001msec 00:12:03.953 00:12:03.953 Disk stats (read/write): 00:12:03.953 nvme0n1: ios=2610/2758, merge=0/0, ticks=448/367, in_queue=815, util=88.48% 00:12:03.953 nvme0n2: ios=1585/1791, merge=0/0, ticks=452/398, in_queue=850, util=89.20% 00:12:03.953 nvme0n3: ios=1536/2030, merge=0/0, ticks=424/385, in_queue=809, util=89.45% 00:12:03.953 nvme0n4: ios=2398/2560, merge=0/0, ticks=455/357, in_queue=812, util=89.51% 00:12:03.953 19:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:03.953 [global] 00:12:03.953 thread=1 00:12:03.953 invalidate=1 00:12:03.953 rw=write 00:12:03.953 time_based=1 00:12:03.953 runtime=1 00:12:03.953 ioengine=libaio 00:12:03.953 direct=1 00:12:03.953 bs=4096 00:12:03.953 iodepth=128 00:12:03.953 norandommap=0 00:12:03.953 numjobs=1 00:12:03.953 00:12:03.953 verify_dump=1 00:12:03.953 verify_backlog=512 00:12:03.953 verify_state_save=0 00:12:03.953 do_verify=1 00:12:03.953 verify=crc32c-intel 00:12:03.953 [job0] 00:12:03.953 filename=/dev/nvme0n1 00:12:03.953 [job1] 00:12:03.953 filename=/dev/nvme0n2 00:12:03.953 [job2] 00:12:03.953 filename=/dev/nvme0n3 00:12:03.953 [job3] 00:12:03.953 filename=/dev/nvme0n4 00:12:03.953 Could not set queue depth (nvme0n1) 00:12:03.953 Could not set queue depth (nvme0n2) 00:12:03.953 Could not set queue depth (nvme0n3) 00:12:03.953 Could not set queue depth (nvme0n4) 00:12:03.953 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:03.953 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:03.953 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:03.953 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:03.953 fio-3.35 00:12:03.953 Starting 4 threads 00:12:05.330 00:12:05.330 
job0: (groupid=0, jobs=1): err= 0: pid=66453: Thu Dec 5 19:31:58 2024 00:12:05.330 read: IOPS=2678, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1004msec) 00:12:05.330 slat (usec): min=5, max=13217, avg=205.51, stdev=1248.00 00:12:05.330 clat (usec): min=1007, max=53905, avg=25497.32, stdev=10468.38 00:12:05.330 lat (usec): min=8166, max=53921, avg=25702.84, stdev=10480.06 00:12:05.330 clat percentiles (usec): 00:12:05.330 | 1.00th=[ 8455], 5.00th=[13435], 10.00th=[14484], 20.00th=[15926], 00:12:05.330 | 30.00th=[18744], 40.00th=[22152], 50.00th=[23200], 60.00th=[24249], 00:12:05.330 | 70.00th=[31589], 80.00th=[33162], 90.00th=[42206], 95.00th=[43779], 00:12:05.330 | 99.00th=[53740], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:12:05.330 | 99.99th=[53740] 00:12:05.330 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:12:05.330 slat (usec): min=10, max=12088, avg=140.06, stdev=782.11 00:12:05.330 clat (usec): min=8010, max=43357, avg=18733.39, stdev=6603.39 00:12:05.330 lat (usec): min=10846, max=43403, avg=18873.45, stdev=6588.19 00:12:05.330 clat percentiles (usec): 00:12:05.330 | 1.00th=[10683], 5.00th=[11731], 10.00th=[11863], 20.00th=[13042], 00:12:05.330 | 30.00th=[13698], 40.00th=[15533], 50.00th=[17957], 60.00th=[18744], 00:12:05.330 | 70.00th=[20841], 80.00th=[22938], 90.00th=[27395], 95.00th=[32113], 00:12:05.330 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:12:05.330 | 99.99th=[43254] 00:12:05.330 bw ( KiB/s): min=11807, max=12792, per=20.81%, avg=12299.50, stdev=696.50, samples=2 00:12:05.330 iops : min= 2951, max= 3198, avg=3074.50, stdev=174.66, samples=2 00:12:05.330 lat (msec) : 2=0.02%, 10=0.90%, 20=52.94%, 50=43.99%, 100=2.15% 00:12:05.330 cpu : usr=2.69%, sys=6.58%, ctx=182, majf=0, minf=9 00:12:05.330 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:05.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:05.330 issued rwts: total=2689,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.330 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:05.330 job1: (groupid=0, jobs=1): err= 0: pid=66454: Thu Dec 5 19:31:58 2024 00:12:05.330 read: IOPS=3381, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1005msec) 00:12:05.330 slat (usec): min=6, max=18852, avg=156.08, stdev=996.27 00:12:05.330 clat (usec): min=1445, max=61170, avg=20778.94, stdev=6767.89 00:12:05.330 lat (usec): min=5193, max=61186, avg=20935.02, stdev=6825.95 00:12:05.330 clat percentiles (usec): 00:12:05.330 | 1.00th=[ 5866], 5.00th=[13698], 10.00th=[16450], 20.00th=[16581], 00:12:05.330 | 30.00th=[16909], 40.00th=[18744], 50.00th=[19792], 60.00th=[20317], 00:12:05.330 | 70.00th=[21627], 80.00th=[24249], 90.00th=[27657], 95.00th=[27919], 00:12:05.330 | 99.00th=[54264], 99.50th=[57934], 99.90th=[61080], 99.95th=[61080], 00:12:05.330 | 99.99th=[61080] 00:12:05.330 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:12:05.330 slat (usec): min=6, max=13027, avg=123.89, stdev=673.99 00:12:05.330 clat (usec): min=5134, max=61109, avg=15832.57, stdev=8462.59 00:12:05.330 lat (usec): min=5385, max=61122, avg=15956.46, stdev=8492.94 00:12:05.330 clat percentiles (usec): 00:12:05.330 | 1.00th=[ 7504], 5.00th=[10290], 10.00th=[10552], 20.00th=[11076], 00:12:05.330 | 30.00th=[11863], 40.00th=[12780], 50.00th=[13173], 60.00th=[13566], 00:12:05.330 | 70.00th=[13829], 80.00th=[16057], 90.00th=[29492], 
95.00th=[37487], 00:12:05.330 | 99.00th=[49546], 99.50th=[50070], 99.90th=[53216], 99.95th=[53216], 00:12:05.330 | 99.99th=[61080] 00:12:05.330 bw ( KiB/s): min=13707, max=14992, per=24.28%, avg=14349.50, stdev=908.63, samples=2 00:12:05.330 iops : min= 3426, max= 3748, avg=3587.00, stdev=227.69, samples=2 00:12:05.330 lat (msec) : 2=0.01%, 10=2.75%, 20=68.95%, 50=27.21%, 100=1.07% 00:12:05.330 cpu : usr=3.88%, sys=8.67%, ctx=200, majf=0, minf=2 00:12:05.330 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:05.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:05.330 issued rwts: total=3398,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.330 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:05.331 job2: (groupid=0, jobs=1): err= 0: pid=66455: Thu Dec 5 19:31:58 2024 00:12:05.331 read: IOPS=3666, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1004msec) 00:12:05.331 slat (usec): min=4, max=7703, avg=127.65, stdev=532.48 00:12:05.331 clat (usec): min=1088, max=29060, avg=16299.18, stdev=4739.79 00:12:05.331 lat (usec): min=4702, max=29079, avg=16426.84, stdev=4746.25 00:12:05.331 clat percentiles (usec): 00:12:05.331 | 1.00th=[10028], 5.00th=[12387], 10.00th=[12780], 20.00th=[13042], 00:12:05.331 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:12:05.331 | 70.00th=[19530], 80.00th=[22414], 90.00th=[22938], 95.00th=[24511], 00:12:05.331 | 99.00th=[28181], 99.50th=[28443], 99.90th=[28967], 99.95th=[28967], 00:12:05.331 | 99.99th=[28967] 00:12:05.331 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:12:05.331 slat (usec): min=9, max=5284, avg=122.78, stdev=466.53 00:12:05.331 clat (usec): min=9738, max=26523, avg=16290.27, stdev=4872.22 00:12:05.331 lat (usec): min=10596, max=26543, avg=16413.04, stdev=4889.05 00:12:05.331 clat percentiles (usec): 00:12:05.331 | 1.00th=[10159], 5.00th=[12125], 10.00th=[12256], 20.00th=[12387], 00:12:05.331 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12911], 60.00th=[14484], 00:12:05.331 | 70.00th=[21103], 80.00th=[22676], 90.00th=[23462], 95.00th=[24249], 00:12:05.331 | 99.00th=[25035], 99.50th=[25297], 99.90th=[25822], 99.95th=[26608], 00:12:05.331 | 99.99th=[26608] 00:12:05.331 bw ( KiB/s): min=12040, max=20521, per=27.55%, avg=16280.50, stdev=5996.97, samples=2 00:12:05.331 iops : min= 3010, max= 5130, avg=4070.00, stdev=1499.07, samples=2 00:12:05.331 lat (msec) : 2=0.01%, 10=0.75%, 20=66.99%, 50=32.25% 00:12:05.331 cpu : usr=2.99%, sys=10.97%, ctx=618, majf=0, minf=1 00:12:05.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:05.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:05.331 issued rwts: total=3681,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:05.331 job3: (groupid=0, jobs=1): err= 0: pid=66456: Thu Dec 5 19:31:58 2024 00:12:05.331 read: IOPS=3787, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1005msec) 00:12:05.331 slat (usec): min=4, max=4908, avg=125.58, stdev=467.82 00:12:05.331 clat (usec): min=1444, max=29916, avg=15978.22, stdev=4950.83 00:12:05.331 lat (usec): min=4499, max=30210, avg=16103.80, stdev=4988.49 00:12:05.331 clat percentiles (usec): 00:12:05.331 | 1.00th=[10552], 5.00th=[11994], 10.00th=[12387], 20.00th=[12649], 00:12:05.331 
| 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[14091], 00:12:05.331 | 70.00th=[17171], 80.00th=[22414], 90.00th=[23462], 95.00th=[25822], 00:12:05.331 | 99.00th=[28443], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:12:05.331 | 99.99th=[30016] 00:12:05.331 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:12:05.331 slat (usec): min=9, max=5738, avg=120.54, stdev=502.21 00:12:05.331 clat (usec): min=9539, max=29491, avg=16144.57, stdev=4971.57 00:12:05.331 lat (usec): min=9561, max=29507, avg=16265.10, stdev=5009.79 00:12:05.331 clat percentiles (usec): 00:12:05.331 | 1.00th=[10683], 5.00th=[11863], 10.00th=[12125], 20.00th=[12387], 00:12:05.331 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12911], 60.00th=[14353], 00:12:05.331 | 70.00th=[19530], 80.00th=[22152], 90.00th=[23725], 95.00th=[25035], 00:12:05.331 | 99.00th=[27132], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:12:05.331 | 99.99th=[29492] 00:12:05.331 bw ( KiB/s): min=12288, max=20480, per=27.72%, avg=16384.00, stdev=5792.62, samples=2 00:12:05.331 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:12:05.331 lat (msec) : 2=0.01%, 10=0.48%, 20=72.49%, 50=27.02% 00:12:05.331 cpu : usr=3.88%, sys=10.46%, ctx=646, majf=0, minf=1 00:12:05.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:05.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:05.331 issued rwts: total=3806,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:05.331 00:12:05.331 Run status group 0 (all jobs): 00:12:05.331 READ: bw=52.8MiB/s (55.3MB/s), 10.5MiB/s-14.8MiB/s (11.0MB/s-15.5MB/s), io=53.0MiB (55.6MB), run=1004-1005msec 00:12:05.331 WRITE: bw=57.7MiB/s (60.5MB/s), 12.0MiB/s-15.9MiB/s (12.5MB/s-16.7MB/s), io=58.0MiB (60.8MB), run=1004-1005msec 00:12:05.331 00:12:05.331 Disk stats (read/write): 00:12:05.331 nvme0n1: ios=2418/2560, merge=0/0, ticks=15856/10014, in_queue=25870, util=87.56% 00:12:05.331 nvme0n2: ios=2733/3072, merge=0/0, ticks=56063/47196, in_queue=103259, util=88.57% 00:12:05.331 nvme0n3: ios=3398/3584, merge=0/0, ticks=12456/11970, in_queue=24426, util=88.84% 00:12:05.331 nvme0n4: ios=3500/3584, merge=0/0, ticks=15838/13928, in_queue=29766, util=89.81% 00:12:05.331 19:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:05.331 [global] 00:12:05.331 thread=1 00:12:05.331 invalidate=1 00:12:05.331 rw=randwrite 00:12:05.331 time_based=1 00:12:05.331 runtime=1 00:12:05.331 ioengine=libaio 00:12:05.331 direct=1 00:12:05.331 bs=4096 00:12:05.331 iodepth=128 00:12:05.331 norandommap=0 00:12:05.331 numjobs=1 00:12:05.331 00:12:05.331 verify_dump=1 00:12:05.331 verify_backlog=512 00:12:05.331 verify_state_save=0 00:12:05.331 do_verify=1 00:12:05.331 verify=crc32c-intel 00:12:05.331 [job0] 00:12:05.331 filename=/dev/nvme0n1 00:12:05.331 [job1] 00:12:05.331 filename=/dev/nvme0n2 00:12:05.331 [job2] 00:12:05.331 filename=/dev/nvme0n3 00:12:05.331 [job3] 00:12:05.331 filename=/dev/nvme0n4 00:12:05.331 Could not set queue depth (nvme0n1) 00:12:05.331 Could not set queue depth (nvme0n2) 00:12:05.331 Could not set queue depth (nvme0n3) 00:12:05.331 Could not set queue depth (nvme0n4) 00:12:05.331 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:05.331 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:05.331 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:05.331 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:05.331 fio-3.35 00:12:05.331 Starting 4 threads 00:12:06.711 00:12:06.711 job0: (groupid=0, jobs=1): err= 0: pid=66509: Thu Dec 5 19:31:59 2024 00:12:06.711 read: IOPS=5004, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1002msec) 00:12:06.711 slat (usec): min=6, max=5264, avg=98.38, stdev=433.64 00:12:06.711 clat (usec): min=842, max=18762, avg=12868.83, stdev=1494.36 00:12:06.711 lat (usec): min=2204, max=18811, avg=12967.21, stdev=1504.84 00:12:06.711 clat percentiles (usec): 00:12:06.711 | 1.00th=[ 7111], 5.00th=[10421], 10.00th=[11338], 20.00th=[12518], 00:12:06.711 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:12:06.711 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13829], 95.00th=[14877], 00:12:06.711 | 99.00th=[16581], 99.50th=[17171], 99.90th=[17957], 99.95th=[17957], 00:12:06.711 | 99.99th=[18744] 00:12:06.711 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:12:06.711 slat (usec): min=14, max=5069, avg=90.01, stdev=478.17 00:12:06.711 clat (usec): min=7022, max=18469, avg=12129.81, stdev=1202.14 00:12:06.711 lat (usec): min=7119, max=18541, avg=12219.83, stdev=1282.51 00:12:06.711 clat percentiles (usec): 00:12:06.711 | 1.00th=[ 8848], 5.00th=[10421], 10.00th=[11076], 20.00th=[11600], 00:12:06.711 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:12:06.711 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13173], 95.00th=[13960], 00:12:06.711 | 99.00th=[16319], 99.50th=[17171], 99.90th=[17957], 99.95th=[18482], 00:12:06.711 | 99.99th=[18482] 00:12:06.711 bw ( KiB/s): min=20480, max=20480, per=26.31%, avg=20480.00, stdev= 0.00, samples=2 00:12:06.711 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:12:06.711 lat (usec) : 1000=0.01% 00:12:06.711 lat (msec) : 4=0.28%, 10=3.48%, 20=96.23% 00:12:06.711 cpu : usr=5.39%, sys=16.08%, ctx=360, majf=0, minf=13 00:12:06.711 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:06.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.711 issued rwts: total=5015,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.711 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.711 job1: (groupid=0, jobs=1): err= 0: pid=66510: Thu Dec 5 19:31:59 2024 00:12:06.711 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:12:06.711 slat (usec): min=7, max=9338, avg=92.34, stdev=540.98 00:12:06.711 clat (usec): min=7449, max=21313, avg=13030.86, stdev=1723.75 00:12:06.711 lat (usec): min=7463, max=24531, avg=13123.20, stdev=1748.03 00:12:06.711 clat percentiles (usec): 00:12:06.711 | 1.00th=[ 8225], 5.00th=[ 9765], 10.00th=[11338], 20.00th=[12387], 00:12:06.711 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:12:06.711 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14091], 95.00th=[15139], 00:12:06.711 | 99.00th=[19530], 99.50th=[20317], 99.90th=[20579], 99.95th=[20579], 00:12:06.711 | 99.99th=[21365] 00:12:06.711 write: IOPS=5184, BW=20.3MiB/s (21.2MB/s)(20.3MiB/1004msec); 0 
zone resets 00:12:06.711 slat (usec): min=10, max=8403, avg=94.41, stdev=563.22 00:12:06.711 clat (usec): min=539, max=16332, avg=11623.26, stdev=1561.78 00:12:06.711 lat (usec): min=4398, max=16372, avg=11717.68, stdev=1475.86 00:12:06.711 clat percentiles (usec): 00:12:06.711 | 1.00th=[ 5342], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[11076], 00:12:06.711 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:12:06.711 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12780], 95.00th=[13566], 00:12:06.711 | 99.00th=[16057], 99.50th=[16188], 99.90th=[16319], 99.95th=[16319], 00:12:06.711 | 99.99th=[16319] 00:12:06.711 bw ( KiB/s): min=20480, max=20521, per=26.33%, avg=20500.50, stdev=28.99, samples=2 00:12:06.711 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:12:06.711 lat (usec) : 750=0.01% 00:12:06.711 lat (msec) : 10=10.02%, 20=89.69%, 50=0.28% 00:12:06.711 cpu : usr=3.69%, sys=13.66%, ctx=232, majf=0, minf=9 00:12:06.711 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:06.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.711 issued rwts: total=5120,5205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.711 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.711 job2: (groupid=0, jobs=1): err= 0: pid=66511: Thu Dec 5 19:31:59 2024 00:12:06.711 read: IOPS=4088, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:12:06.711 slat (usec): min=7, max=5479, avg=112.78, stdev=549.44 00:12:06.711 clat (usec): min=331, max=16867, avg=14862.61, stdev=742.44 00:12:06.711 lat (usec): min=3538, max=16875, avg=14975.39, stdev=489.49 00:12:06.711 clat percentiles (usec): 00:12:06.711 | 1.00th=[11731], 5.00th=[14222], 10.00th=[14353], 20.00th=[14615], 00:12:06.711 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15008], 60.00th=[15008], 00:12:06.711 | 70.00th=[15139], 80.00th=[15270], 90.00th=[15401], 95.00th=[15533], 00:12:06.711 | 99.00th=[16712], 99.50th=[16909], 99.90th=[16909], 99.95th=[16909], 00:12:06.711 | 99.99th=[16909] 00:12:06.711 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:12:06.711 slat (usec): min=9, max=8491, avg=109.45, stdev=497.29 00:12:06.711 clat (usec): min=3541, max=20773, avg=14228.30, stdev=1514.36 00:12:06.711 lat (usec): min=3560, max=20813, avg=14337.75, stdev=1438.98 00:12:06.711 clat percentiles (usec): 00:12:06.711 | 1.00th=[ 7570], 5.00th=[11994], 10.00th=[13829], 20.00th=[13960], 00:12:06.711 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14222], 60.00th=[14353], 00:12:06.711 | 70.00th=[14484], 80.00th=[14615], 90.00th=[14746], 95.00th=[14877], 00:12:06.711 | 99.00th=[20579], 99.50th=[20579], 99.90th=[20841], 99.95th=[20841], 00:12:06.711 | 99.99th=[20841] 00:12:06.711 bw ( KiB/s): min=17194, max=18688, per=23.04%, avg=17941.00, stdev=1056.42, samples=2 00:12:06.711 iops : min= 4298, max= 4672, avg=4485.00, stdev=264.46, samples=2 00:12:06.711 lat (usec) : 500=0.01% 00:12:06.711 lat (msec) : 4=0.23%, 10=0.51%, 20=98.54%, 50=0.71% 00:12:06.711 cpu : usr=4.20%, sys=11.69%, ctx=290, majf=0, minf=23 00:12:06.711 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:06.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.711 issued rwts: total=4097,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.712 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:12:06.712 job3: (groupid=0, jobs=1): err= 0: pid=66512: Thu Dec 5 19:31:59 2024 00:12:06.712 read: IOPS=4147, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1003msec) 00:12:06.712 slat (usec): min=6, max=6962, avg=116.13, stdev=552.39 00:12:06.712 clat (usec): min=895, max=21211, avg=14866.39, stdev=1738.72 00:12:06.712 lat (usec): min=3842, max=26645, avg=14982.52, stdev=1757.72 00:12:06.712 clat percentiles (usec): 00:12:06.712 | 1.00th=[ 9765], 5.00th=[12387], 10.00th=[13304], 20.00th=[14484], 00:12:06.712 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15008], 60.00th=[15139], 00:12:06.712 | 70.00th=[15270], 80.00th=[15533], 90.00th=[16057], 95.00th=[17695], 00:12:06.712 | 99.00th=[19792], 99.50th=[20317], 99.90th=[21103], 99.95th=[21103], 00:12:06.712 | 99.99th=[21103] 00:12:06.712 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:12:06.712 slat (usec): min=10, max=7383, avg=104.66, stdev=614.23 00:12:06.712 clat (usec): min=6300, max=21081, avg=14100.83, stdev=1511.84 00:12:06.712 lat (usec): min=6353, max=21741, avg=14205.50, stdev=1613.15 00:12:06.712 clat percentiles (usec): 00:12:06.712 | 1.00th=[ 9503], 5.00th=[11994], 10.00th=[12911], 20.00th=[13435], 00:12:06.712 | 30.00th=[13698], 40.00th=[13829], 50.00th=[14091], 60.00th=[14222], 00:12:06.712 | 70.00th=[14353], 80.00th=[14877], 90.00th=[15401], 95.00th=[16909], 00:12:06.712 | 99.00th=[19530], 99.50th=[19792], 99.90th=[21103], 99.95th=[21103], 00:12:06.712 | 99.99th=[21103] 00:12:06.712 bw ( KiB/s): min=17683, max=18704, per=23.37%, avg=18193.50, stdev=721.96, samples=2 00:12:06.712 iops : min= 4420, max= 4676, avg=4548.00, stdev=181.02, samples=2 00:12:06.712 lat (usec) : 1000=0.01% 00:12:06.712 lat (msec) : 4=0.11%, 10=1.28%, 20=97.91%, 50=0.68% 00:12:06.712 cpu : usr=4.29%, sys=12.18%, ctx=308, majf=0, minf=5 00:12:06.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:06.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.712 issued rwts: total=4160,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.712 00:12:06.712 Run status group 0 (all jobs): 00:12:06.712 READ: bw=71.6MiB/s (75.0MB/s), 16.0MiB/s-19.9MiB/s (16.7MB/s-20.9MB/s), io=71.8MiB (75.3MB), run=1002-1004msec 00:12:06.712 WRITE: bw=76.0MiB/s (79.7MB/s), 17.9MiB/s-20.3MiB/s (18.8MB/s-21.2MB/s), io=76.3MiB (80.0MB), run=1002-1004msec 00:12:06.712 00:12:06.712 Disk stats (read/write): 00:12:06.712 nvme0n1: ios=4209/4608, merge=0/0, ticks=25650/22922, in_queue=48572, util=88.08% 00:12:06.712 nvme0n2: ios=4260/4608, merge=0/0, ticks=52215/50274, in_queue=102489, util=88.78% 00:12:06.712 nvme0n3: ios=3584/3936, merge=0/0, ticks=11965/12282, in_queue=24247, util=88.58% 00:12:06.712 nvme0n4: ios=3584/3999, merge=0/0, ticks=26021/23726, in_queue=49747, util=89.75% 00:12:06.712 19:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:06.712 19:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66531 00:12:06.712 19:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:06.712 19:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:06.712 [global] 00:12:06.712 thread=1 00:12:06.712 invalidate=1 00:12:06.712 
rw=read 00:12:06.712 time_based=1 00:12:06.712 runtime=10 00:12:06.712 ioengine=libaio 00:12:06.712 direct=1 00:12:06.712 bs=4096 00:12:06.712 iodepth=1 00:12:06.712 norandommap=1 00:12:06.712 numjobs=1 00:12:06.712 00:12:06.712 [job0] 00:12:06.712 filename=/dev/nvme0n1 00:12:06.712 [job1] 00:12:06.712 filename=/dev/nvme0n2 00:12:06.712 [job2] 00:12:06.712 filename=/dev/nvme0n3 00:12:06.712 [job3] 00:12:06.712 filename=/dev/nvme0n4 00:12:06.712 Could not set queue depth (nvme0n1) 00:12:06.712 Could not set queue depth (nvme0n2) 00:12:06.712 Could not set queue depth (nvme0n3) 00:12:06.712 Could not set queue depth (nvme0n4) 00:12:06.712 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.712 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.712 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.712 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.712 fio-3.35 00:12:06.712 Starting 4 threads 00:12:10.016 19:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:10.016 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=43294720, buflen=4096 00:12:10.016 fio: pid=66578, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:10.016 19:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:10.016 fio: pid=66577, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:10.016 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=68186112, buflen=4096 00:12:10.016 19:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:10.016 19:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:10.275 fio: pid=66575, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:10.275 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=50794496, buflen=4096 00:12:10.275 19:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:10.275 19:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:10.534 fio: pid=66576, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:10.534 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=16650240, buflen=4096 00:12:10.794 00:12:10.794 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66575: Thu Dec 5 19:32:03 2024 00:12:10.794 read: IOPS=3563, BW=13.9MiB/s (14.6MB/s)(48.4MiB/3480msec) 00:12:10.794 slat (usec): min=7, max=14164, avg=16.47, stdev=174.58 00:12:10.794 clat (usec): min=130, max=2883, avg=262.99, stdev=65.18 00:12:10.794 lat (usec): min=141, max=14347, avg=279.46, stdev=186.24 00:12:10.794 clat percentiles (usec): 00:12:10.794 | 1.00th=[ 143], 5.00th=[ 169], 10.00th=[ 215], 20.00th=[ 247], 00:12:10.794 | 30.00th=[ 255], 40.00th=[ 
262], 50.00th=[ 265], 60.00th=[ 269], 00:12:10.794 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 306], 00:12:10.794 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 807], 99.95th=[ 1680], 00:12:10.794 | 99.99th=[ 2868] 00:12:10.794 bw ( KiB/s): min=12520, max=14392, per=21.81%, avg=13894.67, stdev=695.74, samples=6 00:12:10.794 iops : min= 3130, max= 3598, avg=3473.67, stdev=173.94, samples=6 00:12:10.794 lat (usec) : 250=22.67%, 500=77.15%, 750=0.06%, 1000=0.02% 00:12:10.794 lat (msec) : 2=0.06%, 4=0.02% 00:12:10.794 cpu : usr=1.24%, sys=4.08%, ctx=12415, majf=0, minf=1 00:12:10.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.794 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.794 issued rwts: total=12402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.794 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66576: Thu Dec 5 19:32:03 2024 00:12:10.794 read: IOPS=5422, BW=21.2MiB/s (22.2MB/s)(79.9MiB/3771msec) 00:12:10.794 slat (usec): min=7, max=12695, avg=16.05, stdev=160.64 00:12:10.794 clat (usec): min=122, max=3686, avg=167.08, stdev=55.07 00:12:10.794 lat (usec): min=133, max=12891, avg=183.13, stdev=170.72 00:12:10.794 clat percentiles (usec): 00:12:10.794 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 151], 00:12:10.794 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:12:10.794 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 212], 00:12:10.794 | 99.00th=[ 289], 99.50th=[ 310], 99.90th=[ 523], 99.95th=[ 938], 00:12:10.794 | 99.99th=[ 2900] 00:12:10.794 bw ( KiB/s): min=16650, max=23208, per=34.16%, avg=21764.86, stdev=2328.22, samples=7 00:12:10.794 iops : min= 4162, max= 5802, avg=5441.14, stdev=582.24, samples=7 00:12:10.794 lat (usec) : 250=97.40%, 500=2.48%, 750=0.03%, 1000=0.02% 00:12:10.794 lat (msec) : 2=0.03%, 4=0.02% 00:12:10.794 cpu : usr=1.70%, sys=6.34%, ctx=20470, majf=0, minf=2 00:12:10.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.794 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.794 issued rwts: total=20450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.794 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66577: Thu Dec 5 19:32:03 2024 00:12:10.794 read: IOPS=5162, BW=20.2MiB/s (21.1MB/s)(65.0MiB/3225msec) 00:12:10.794 slat (usec): min=7, max=10405, avg=13.58, stdev=96.35 00:12:10.794 clat (usec): min=147, max=2289, avg=179.12, stdev=38.77 00:12:10.794 lat (usec): min=161, max=10604, avg=192.70, stdev=104.29 00:12:10.794 clat percentiles (usec): 00:12:10.794 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:12:10.794 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:12:10.794 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 202], 00:12:10.794 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 562], 99.95th=[ 848], 00:12:10.794 | 99.99th=[ 1614] 00:12:10.794 bw ( KiB/s): min=20584, max=21424, per=33.26%, avg=21194.67, stdev=306.13, samples=6 00:12:10.794 iops : min= 5146, max= 5356, avg=5298.67, stdev=76.53, samples=6 
00:12:10.794 lat (usec) : 250=96.77%, 500=3.09%, 750=0.06%, 1000=0.04% 00:12:10.795 lat (msec) : 2=0.02%, 4=0.01% 00:12:10.795 cpu : usr=1.36%, sys=5.43%, ctx=16657, majf=0, minf=2 00:12:10.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.795 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.795 issued rwts: total=16648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.795 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66578: Thu Dec 5 19:32:03 2024 00:12:10.795 read: IOPS=3564, BW=13.9MiB/s (14.6MB/s)(41.3MiB/2966msec) 00:12:10.795 slat (usec): min=10, max=131, avg=14.59, stdev= 5.06 00:12:10.795 clat (usec): min=155, max=2173, avg=264.69, stdev=36.29 00:12:10.795 lat (usec): min=167, max=2198, avg=279.28, stdev=36.12 00:12:10.795 clat percentiles (usec): 00:12:10.795 | 1.00th=[ 192], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 249], 00:12:10.795 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:12:10.795 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:12:10.795 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 537], 99.95th=[ 783], 00:12:10.795 | 99.99th=[ 1614] 00:12:10.795 bw ( KiB/s): min=14168, max=14480, per=22.43%, avg=14292.80, stdev=135.95, samples=5 00:12:10.795 iops : min= 3542, max= 3620, avg=3573.20, stdev=33.99, samples=5 00:12:10.795 lat (usec) : 250=20.82%, 500=79.06%, 750=0.05%, 1000=0.04% 00:12:10.795 lat (msec) : 2=0.02%, 4=0.01% 00:12:10.795 cpu : usr=0.94%, sys=4.55%, ctx=10573, majf=0, minf=2 00:12:10.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.795 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.795 issued rwts: total=10571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.795 00:12:10.795 Run status group 0 (all jobs): 00:12:10.795 READ: bw=62.2MiB/s (65.2MB/s), 13.9MiB/s-21.2MiB/s (14.6MB/s-22.2MB/s), io=235MiB (246MB), run=2966-3771msec 00:12:10.795 00:12:10.795 Disk stats (read/write): 00:12:10.795 nvme0n1: ios=11913/0, merge=0/0, ticks=3163/0, in_queue=3163, util=95.31% 00:12:10.795 nvme0n2: ios=19523/0, merge=0/0, ticks=3273/0, in_queue=3273, util=95.43% 00:12:10.795 nvme0n3: ios=16299/0, merge=0/0, ticks=2920/0, in_queue=2920, util=96.31% 00:12:10.795 nvme0n4: ios=10222/0, merge=0/0, ticks=2764/0, in_queue=2764, util=96.66% 00:12:10.795 19:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:10.795 19:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:11.054 19:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:11.054 19:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:11.312 19:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:11.312 
19:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:11.571 19:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:11.571 19:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:11.834 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:11.834 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:12.101 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:12.101 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66531 00:12:12.101 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:12.101 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.101 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:12.101 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:12.101 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:12.101 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.101 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:12.101 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.101 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:12.101 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:12.101 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:12.101 nvmf hotplug test: fio failed as expected 00:12:12.101 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == 
tcp ']' 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:12.360 rmmod nvme_tcp 00:12:12.360 rmmod nvme_fabrics 00:12:12.360 rmmod nvme_keyring 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66138 ']' 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66138 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66138 ']' 00:12:12.360 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66138 00:12:12.618 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:12.618 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.618 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66138 00:12:12.618 killing process with pid 66138 00:12:12.618 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.618 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.619 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66138' 00:12:12.619 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66138 00:12:12.619 19:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66138 00:12:12.619 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:12.619 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:12.619 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:12.619 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:12.619 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:12.619 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:12.619 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:12.619 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:12.619 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:12.619 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 
-- # ip link set nvmf_tgt_br nomaster 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:12:12.894 00:12:12.894 real 0m20.535s 00:12:12.894 user 1m17.045s 00:12:12.894 sys 0m10.401s 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.894 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.894 ************************************ 00:12:12.894 END TEST nvmf_fio_target 00:12:12.894 ************************************ 00:12:12.895 19:32:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:12.895 19:32:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.895 19:32:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.895 19:32:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:13.154 ************************************ 00:12:13.154 START TEST nvmf_bdevio 00:12:13.154 ************************************ 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:13.154 * Looking for test storage... 
00:12:13.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:13.154 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:13.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.155 --rc genhtml_branch_coverage=1 00:12:13.155 --rc genhtml_function_coverage=1 00:12:13.155 --rc genhtml_legend=1 00:12:13.155 --rc geninfo_all_blocks=1 00:12:13.155 --rc geninfo_unexecuted_blocks=1 00:12:13.155 00:12:13.155 ' 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:13.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.155 --rc genhtml_branch_coverage=1 00:12:13.155 --rc genhtml_function_coverage=1 00:12:13.155 --rc genhtml_legend=1 00:12:13.155 --rc geninfo_all_blocks=1 00:12:13.155 --rc geninfo_unexecuted_blocks=1 00:12:13.155 00:12:13.155 ' 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:13.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.155 --rc genhtml_branch_coverage=1 00:12:13.155 --rc genhtml_function_coverage=1 00:12:13.155 --rc genhtml_legend=1 00:12:13.155 --rc geninfo_all_blocks=1 00:12:13.155 --rc geninfo_unexecuted_blocks=1 00:12:13.155 00:12:13.155 ' 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:13.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.155 --rc genhtml_branch_coverage=1 00:12:13.155 --rc genhtml_function_coverage=1 00:12:13.155 --rc genhtml_legend=1 00:12:13.155 --rc geninfo_all_blocks=1 00:12:13.155 --rc geninfo_unexecuted_blocks=1 00:12:13.155 00:12:13.155 ' 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:13.155 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:13.155 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:13.156 Cannot find device "nvmf_init_br" 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:13.156 Cannot find device "nvmf_init_br2" 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:13.156 Cannot find device "nvmf_tgt_br" 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:12:13.156 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:13.156 Cannot find device "nvmf_tgt_br2" 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:13.419 Cannot find device "nvmf_init_br" 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:13.419 Cannot find device "nvmf_init_br2" 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:13.419 Cannot find device "nvmf_tgt_br" 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:13.419 Cannot find device "nvmf_tgt_br2" 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:13.419 Cannot find device "nvmf_br" 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:13.419 Cannot find device "nvmf_init_if" 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:13.419 Cannot find device "nvmf_init_if2" 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:13.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:13.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:13.419 
19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:13.419 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:13.687 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:13.687 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:13.687 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:13.688 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:13.688 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:12:13.688 00:12:13.688 --- 10.0.0.3 ping statistics --- 00:12:13.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.688 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:13.688 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:13.688 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:12:13.688 00:12:13.688 --- 10.0.0.4 ping statistics --- 00:12:13.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.688 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:13.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:13.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:13.688 00:12:13.688 --- 10.0.0.1 ping statistics --- 00:12:13.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.688 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:13.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:13.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:12:13.688 00:12:13.688 --- 10.0.0.2 ping statistics --- 00:12:13.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.688 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66893 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66893 00:12:13.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66893 ']' 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.688 19:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:13.688 [2024-12-05 19:32:06.984255] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
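At this point the target has just been launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x78) and waitforlisten is polling its RPC socket; the rpc_cmd entries that follow configure it, and bdevio then attaches as an initiator through a generated JSON config. A minimal sketch of the same sequence using SPDK's stock rpc.py client; paths relative to an SPDK checkout and the default /var/tmp/spdk.sock socket are assumptions, and the harness wraps these calls in nvmfappstart, rpc_cmd and gen_nvmf_target_json rather than invoking them directly:

# Start the NVMe-oF target in the namespace with the core mask from the log, then wait for RPC.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# TCP transport, a 64 MiB malloc bdev, a subsystem exposing it, and a listener on 10.0.0.3:4420,
# mirroring the rpc_cmd calls recorded below.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# bdevio then consumes a JSON config describing bdev_nvme_attach_controller to 10.0.0.3:4420
# (the gen_nvmf_target_json output printed later in the log) and runs its suite against Nvme1n1.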
00:12:13.688 [2024-12-05 19:32:06.984694] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.946 [2024-12-05 19:32:07.142177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.946 [2024-12-05 19:32:07.209650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.946 [2024-12-05 19:32:07.210284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.946 [2024-12-05 19:32:07.210314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.946 [2024-12-05 19:32:07.210325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.946 [2024-12-05 19:32:07.210334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:13.946 [2024-12-05 19:32:07.212001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:13.946 [2024-12-05 19:32:07.212188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:13.946 [2024-12-05 19:32:07.212652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:13.946 [2024-12-05 19:32:07.212668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.946 [2024-12-05 19:32:07.272568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:14.916 19:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.916 19:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:14.916 19:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:14.916 19:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:14.916 19:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:14.916 19:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.916 19:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:14.916 19:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.916 19:32:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:14.916 [2024-12-05 19:32:08.008042] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:14.916 Malloc0 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:14.916 [2024-12-05 19:32:08.073034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:14.916 { 00:12:14.916 "params": { 00:12:14.916 "name": "Nvme$subsystem", 00:12:14.916 "trtype": "$TEST_TRANSPORT", 00:12:14.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:14.916 "adrfam": "ipv4", 00:12:14.916 "trsvcid": "$NVMF_PORT", 00:12:14.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:14.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:14.916 "hdgst": ${hdgst:-false}, 00:12:14.916 "ddgst": ${ddgst:-false} 00:12:14.916 }, 00:12:14.916 "method": "bdev_nvme_attach_controller" 00:12:14.916 } 00:12:14.916 EOF 00:12:14.916 )") 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:14.916 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:14.916 "params": { 00:12:14.916 "name": "Nvme1", 00:12:14.916 "trtype": "tcp", 00:12:14.916 "traddr": "10.0.0.3", 00:12:14.916 "adrfam": "ipv4", 00:12:14.916 "trsvcid": "4420", 00:12:14.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:14.916 "hdgst": false, 00:12:14.916 "ddgst": false 00:12:14.916 }, 00:12:14.916 "method": "bdev_nvme_attach_controller" 00:12:14.916 }' 00:12:14.916 [2024-12-05 19:32:08.133979] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:12:14.916 [2024-12-05 19:32:08.134071] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66935 ] 00:12:14.916 [2024-12-05 19:32:08.287367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:15.174 [2024-12-05 19:32:08.354756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.175 [2024-12-05 19:32:08.354902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.175 [2024-12-05 19:32:08.354911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.175 [2024-12-05 19:32:08.421609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:15.175 I/O targets: 00:12:15.175 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:15.175 00:12:15.175 00:12:15.175 CUnit - A unit testing framework for C - Version 2.1-3 00:12:15.175 http://cunit.sourceforge.net/ 00:12:15.175 00:12:15.175 00:12:15.175 Suite: bdevio tests on: Nvme1n1 00:12:15.175 Test: blockdev write read block ...passed 00:12:15.175 Test: blockdev write zeroes read block ...passed 00:12:15.175 Test: blockdev write zeroes read no split ...passed 00:12:15.175 Test: blockdev write zeroes read split ...passed 00:12:15.175 Test: blockdev write zeroes read split partial ...passed 00:12:15.175 Test: blockdev reset ...[2024-12-05 19:32:08.572663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:15.175 [2024-12-05 19:32:08.572767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e26190 (9): Bad file descriptor 00:12:15.175 passed 00:12:15.175 Test: blockdev write read 8 blocks ...[2024-12-05 19:32:08.589382] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:15.175 passed 00:12:15.175 Test: blockdev write read size > 128k ...passed 00:12:15.175 Test: blockdev write read invalid size ...passed 00:12:15.175 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:15.175 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:15.175 Test: blockdev write read max offset ...passed 00:12:15.175 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:15.175 Test: blockdev writev readv 8 blocks ...passed 00:12:15.175 Test: blockdev writev readv 30 x 1block ...passed 00:12:15.175 Test: blockdev writev readv block ...passed 00:12:15.175 Test: blockdev writev readv size > 128k ...passed 00:12:15.175 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:15.175 Test: blockdev comparev and writev ...[2024-12-05 19:32:08.597212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:15.175 [2024-12-05 19:32:08.597376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:15.175 [2024-12-05 19:32:08.597404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:15.175 [2024-12-05 19:32:08.597417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:15.175 [2024-12-05 19:32:08.597714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:15.175 [2024-12-05 19:32:08.597733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:15.175 [2024-12-05 19:32:08.597751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:15.175 [2024-12-05 19:32:08.597762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:15.175 [2024-12-05 19:32:08.598051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:15.175 [2024-12-05 19:32:08.598068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:15.175 [2024-12-05 19:32:08.598086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:15.175 [2024-12-05 19:32:08.598098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:15.175 [2024-12-05 19:32:08.598394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:15.175 [2024-12-05 19:32:08.598412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:15.175 [2024-12-05 19:32:08.598429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:15.175 [2024-12-05 19:32:08.598441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:12:15.175 passed 00:12:15.175 Test: blockdev nvme passthru rw ...passed 00:12:15.175 Test: blockdev nvme passthru vendor specific ...[2024-12-05 19:32:08.599443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:15.175 [2024-12-05 19:32:08.599472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:15.175 [2024-12-05 19:32:08.599579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:15.175 [2024-12-05 19:32:08.599596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:15.175 [2024-12-05 19:32:08.599703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:15.175 [2024-12-05 19:32:08.599720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:15.175 passed 00:12:15.175 Test: blockdev nvme admin passthru ...[2024-12-05 19:32:08.599825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:15.175 [2024-12-05 19:32:08.599848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:15.433 passed 00:12:15.433 Test: blockdev copy ...passed 00:12:15.433 00:12:15.433 Run Summary: Type Total Ran Passed Failed Inactive 00:12:15.433 suites 1 1 n/a 0 0 00:12:15.433 tests 23 23 23 0 0 00:12:15.433 asserts 152 152 152 0 n/a 00:12:15.433 00:12:15.433 Elapsed time = 0.147 seconds 00:12:15.433 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.433 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.433 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.433 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.433 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:15.433 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:15.433 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:15.433 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:15.433 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:15.433 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:15.433 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:15.433 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:15.433 rmmod nvme_tcp 00:12:15.433 rmmod nvme_fabrics 00:12:15.691 rmmod nvme_keyring 00:12:15.691 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:15.691 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:15.691 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:15.691 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 66893 ']' 00:12:15.691 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66893 00:12:15.691 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66893 ']' 00:12:15.691 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66893 00:12:15.691 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:15.691 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.691 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66893 00:12:15.691 killing process with pid 66893 00:12:15.691 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:15.691 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:15.691 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66893' 00:12:15.691 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66893 00:12:15.691 19:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66893 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:15.950 19:32:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:15.950 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:16.208 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:16.208 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.208 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.208 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.208 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:12:16.208 00:12:16.208 real 0m3.094s 00:12:16.208 user 0m9.272s 00:12:16.208 sys 0m0.872s 00:12:16.208 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.208 ************************************ 00:12:16.208 END TEST nvmf_bdevio 00:12:16.208 ************************************ 00:12:16.208 19:32:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:16.208 19:32:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:16.208 ************************************ 00:12:16.208 END TEST nvmf_target_core 00:12:16.208 ************************************ 00:12:16.208 00:12:16.208 real 2m35.766s 00:12:16.208 user 6m49.986s 00:12:16.208 sys 0m53.902s 00:12:16.208 19:32:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.208 19:32:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:16.208 19:32:09 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:16.208 19:32:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:16.208 19:32:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.208 19:32:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:16.208 ************************************ 00:12:16.208 START TEST nvmf_target_extra 00:12:16.208 ************************************ 00:12:16.208 19:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:16.208 * Looking for test storage... 
00:12:16.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:16.208 19:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:16.208 19:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:12:16.208 19:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:16.467 19:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:16.467 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.467 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.467 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.467 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:16.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.468 --rc genhtml_branch_coverage=1 00:12:16.468 --rc genhtml_function_coverage=1 00:12:16.468 --rc genhtml_legend=1 00:12:16.468 --rc geninfo_all_blocks=1 00:12:16.468 --rc geninfo_unexecuted_blocks=1 00:12:16.468 00:12:16.468 ' 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:16.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.468 --rc genhtml_branch_coverage=1 00:12:16.468 --rc genhtml_function_coverage=1 00:12:16.468 --rc genhtml_legend=1 00:12:16.468 --rc geninfo_all_blocks=1 00:12:16.468 --rc geninfo_unexecuted_blocks=1 00:12:16.468 00:12:16.468 ' 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:16.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.468 --rc genhtml_branch_coverage=1 00:12:16.468 --rc genhtml_function_coverage=1 00:12:16.468 --rc genhtml_legend=1 00:12:16.468 --rc geninfo_all_blocks=1 00:12:16.468 --rc geninfo_unexecuted_blocks=1 00:12:16.468 00:12:16.468 ' 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:16.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.468 --rc genhtml_branch_coverage=1 00:12:16.468 --rc genhtml_function_coverage=1 00:12:16.468 --rc genhtml_legend=1 00:12:16.468 --rc geninfo_all_blocks=1 00:12:16.468 --rc geninfo_unexecuted_blocks=1 00:12:16.468 00:12:16.468 ' 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.468 19:32:09 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:16.468 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:16.468 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:16.469 ************************************ 00:12:16.469 START TEST nvmf_auth_target 00:12:16.469 ************************************ 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:16.469 * Looking for test storage... 
00:12:16.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.469 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:16.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.729 --rc genhtml_branch_coverage=1 00:12:16.729 --rc genhtml_function_coverage=1 00:12:16.729 --rc genhtml_legend=1 00:12:16.729 --rc geninfo_all_blocks=1 00:12:16.729 --rc geninfo_unexecuted_blocks=1 00:12:16.729 00:12:16.729 ' 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:16.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.729 --rc genhtml_branch_coverage=1 00:12:16.729 --rc genhtml_function_coverage=1 00:12:16.729 --rc genhtml_legend=1 00:12:16.729 --rc geninfo_all_blocks=1 00:12:16.729 --rc geninfo_unexecuted_blocks=1 00:12:16.729 00:12:16.729 ' 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:16.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.729 --rc genhtml_branch_coverage=1 00:12:16.729 --rc genhtml_function_coverage=1 00:12:16.729 --rc genhtml_legend=1 00:12:16.729 --rc geninfo_all_blocks=1 00:12:16.729 --rc geninfo_unexecuted_blocks=1 00:12:16.729 00:12:16.729 ' 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:16.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.729 --rc genhtml_branch_coverage=1 00:12:16.729 --rc genhtml_function_coverage=1 00:12:16.729 --rc genhtml_legend=1 00:12:16.729 --rc geninfo_all_blocks=1 00:12:16.729 --rc geninfo_unexecuted_blocks=1 00:12:16.729 00:12:16.729 ' 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.729 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:16.730 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:16.730 
19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:16.730 Cannot find device "nvmf_init_br" 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:16.730 Cannot find device "nvmf_init_br2" 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:16.730 Cannot find device "nvmf_tgt_br" 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:16.730 Cannot find device "nvmf_tgt_br2" 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:12:16.730 19:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:16.730 Cannot find device "nvmf_init_br" 00:12:16.730 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:12:16.730 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:16.730 Cannot find device "nvmf_init_br2" 00:12:16.730 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:12:16.730 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:16.730 Cannot find device "nvmf_tgt_br" 00:12:16.730 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:12:16.730 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:16.730 Cannot find device "nvmf_tgt_br2" 00:12:16.730 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:12:16.730 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:16.730 Cannot find device "nvmf_br" 00:12:16.730 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:12:16.730 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:16.730 Cannot find device "nvmf_init_if" 00:12:16.730 19:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:12:16.730 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:16.730 Cannot find device "nvmf_init_if2" 00:12:16.730 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:12:16.730 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:16.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.730 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:12:16.730 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:16.731 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.731 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:12:16.731 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:16.731 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:16.731 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:16.731 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:16.731 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:16.731 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:16.731 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:16.991 19:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:16.991 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:16.991 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:12:16.991 00:12:16.991 --- 10.0.0.3 ping statistics --- 00:12:16.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.991 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:16.991 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:16.991 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:12:16.991 00:12:16.991 --- 10.0.0.4 ping statistics --- 00:12:16.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.991 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:16.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:16.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:12:16.991 00:12:16.991 --- 10.0.0.1 ping statistics --- 00:12:16.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.991 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:16.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:12:16.991 00:12:16.991 --- 10.0.0.2 ping statistics --- 00:12:16.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.991 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67219 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67219 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67219 ']' 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
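[editor's note] At this point the test fabric is in place: the initiator-side veths (10.0.0.1/10.0.0.2) and the target-side veths (10.0.0.3/10.0.0.4, moved into the nvmf_tgt_ns_spdk namespace) are joined by the nvmf_br bridge, iptables accepts TCP/4420 on the initiator interfaces, connectivity is confirmed by the four pings, and nvmf_tgt is started inside the namespace with the nvmf_auth debug log component enabled. A condensed reproduction of that setup, using only commands that appear in the trace (one interface pair per side; the run above adds a second pair and the FORWARD rule the same way, and paths are the CI VM's), would look like:

    # veth fabric: initiator end stays in the root namespace, target end in a netns
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bridge the two pairs together and open the NVMe/TCP port
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3        # initiator -> target, as verified above

    # SPDK target inside the namespace, with DH-HMAC-CHAP debug logging
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &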
00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.991 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67238 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d2a9b00422f4681c54e69d54bca670ddc01a35b9d9a408eb 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Myl 00:12:17.559 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d2a9b00422f4681c54e69d54bca670ddc01a35b9d9a408eb 0 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d2a9b00422f4681c54e69d54bca670ddc01a35b9d9a408eb 0 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d2a9b00422f4681c54e69d54bca670ddc01a35b9d9a408eb 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:17.560 19:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Myl 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Myl 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Myl 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=771f2c40e1ae0216a68c49ec679e2027a65d6832cc5c74d1020143aaedf35c98 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.jaV 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 771f2c40e1ae0216a68c49ec679e2027a65d6832cc5c74d1020143aaedf35c98 3 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 771f2c40e1ae0216a68c49ec679e2027a65d6832cc5c74d1020143aaedf35c98 3 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=771f2c40e1ae0216a68c49ec679e2027a65d6832cc5c74d1020143aaedf35c98 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.jaV 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.jaV 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.jaV 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:17.560 19:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cca3395bd95edc077c64670c0bc9e6de 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.HgJ 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cca3395bd95edc077c64670c0bc9e6de 1 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cca3395bd95edc077c64670c0bc9e6de 1 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cca3395bd95edc077c64670c0bc9e6de 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:17.560 19:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.HgJ 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.HgJ 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.HgJ 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=74b67b8ffd45b8bc664d54189921ee581e3a7269ef57592f 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.eh2 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 74b67b8ffd45b8bc664d54189921ee581e3a7269ef57592f 2 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 74b67b8ffd45b8bc664d54189921ee581e3a7269ef57592f 2 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=74b67b8ffd45b8bc664d54189921ee581e3a7269ef57592f 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.eh2 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.eh2 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.eh2 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:17.819 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ffae2956cb4df68c27f772f344c43065b4af4df8dbf467aa 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.GQo 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ffae2956cb4df68c27f772f344c43065b4af4df8dbf467aa 2 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ffae2956cb4df68c27f772f344c43065b4af4df8dbf467aa 2 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ffae2956cb4df68c27f772f344c43065b4af4df8dbf467aa 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.GQo 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.GQo 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.GQo 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:17.820 19:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4638b115c3d0bcff5b45436ac61f418a 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.3iZ 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4638b115c3d0bcff5b45436ac61f418a 1 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4638b115c3d0bcff5b45436ac61f418a 1 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4638b115c3d0bcff5b45436ac61f418a 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.3iZ 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.3iZ 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.3iZ 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3ad0b9bb987cd13fe6a3973f48577ba5c9d52ec73f2ce739bf48cb21765d975a 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Gpx 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
3ad0b9bb987cd13fe6a3973f48577ba5c9d52ec73f2ce739bf48cb21765d975a 3 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3ad0b9bb987cd13fe6a3973f48577ba5c9d52ec73f2ce739bf48cb21765d975a 3 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3ad0b9bb987cd13fe6a3973f48577ba5c9d52ec73f2ce739bf48cb21765d975a 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:17.820 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:18.079 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Gpx 00:12:18.079 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Gpx 00:12:18.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.079 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Gpx 00:12:18.079 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:18.079 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67219 00:12:18.079 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67219 ']' 00:12:18.079 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.079 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.079 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.079 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.079 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:18.338 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.338 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:18.338 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67238 /var/tmp/host.sock 00:12:18.338 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67238 ']' 00:12:18.338 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:18.338 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.338 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
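[editor's note] All eight secrets are now on disk: keys[0..3] are the host keys (null/48, sha256/32, sha384/48, sha512/64) and ckeys[0..2] the matching controller keys used for bidirectional authentication, with ckeys[3] intentionally left empty. gen_dhchap_key writes each one in the DHHC-1 representation visible above, where the second field is the hash hint (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the third is the base64 of the ASCII hex key with a CRC-32 appended. The following stand-alone sketch of that encoding is re-derived from the secrets printed in this log rather than copied from the SPDK helper, and it assumes the CRC is appended least-significant byte first:

    gen_dhchap_key() {
        # digest id (0-3) and hex length, e.g. "1 32" for a sha256-hinted 32-char key
        local digest=$1 hexlen=$2 key
        key=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)
        # append CRC-32 of the ASCII key (assumed little-endian), base64, DHHC-1 prefix
        python3 -c 'import base64,struct,sys,zlib; key=sys.argv[2].encode(); blob=key+struct.pack("<I", zlib.crc32(key) & 0xffffffff); print("DHHC-1:%02d:%s:" % (int(sys.argv[1]), base64.b64encode(blob).decode()))' "$digest" "$key"
    }
    gen_dhchap_key 1 32      # prints something like DHHC-1:01:<base64>: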
00:12:18.338 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.338 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.598 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.598 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:18.598 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:18.598 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.598 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.598 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.598 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:18.598 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Myl 00:12:18.598 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.598 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.598 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.598 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Myl 00:12:18.598 19:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Myl 00:12:18.857 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.jaV ]] 00:12:18.858 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jaV 00:12:18.858 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.858 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.858 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.858 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jaV 00:12:18.858 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jaV 00:12:19.117 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:19.117 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.HgJ 00:12:19.117 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.117 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.117 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.117 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.HgJ 00:12:19.117 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.HgJ 00:12:19.376 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.eh2 ]] 00:12:19.376 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eh2 00:12:19.376 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.376 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.376 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.376 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eh2 00:12:19.376 19:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eh2 00:12:19.635 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:19.635 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.GQo 00:12:19.635 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.635 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.635 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.635 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.GQo 00:12:19.635 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.GQo 00:12:19.894 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.3iZ ]] 00:12:19.894 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3iZ 00:12:19.894 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.894 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.894 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.894 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3iZ 00:12:19.894 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3iZ 00:12:20.153 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:20.153 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Gpx 00:12:20.153 19:32:13 
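[editor's note] Each key file is registered twice because DH-HMAC-CHAP needs the secret on both ends: rpc_cmd loads it into the nvmf_tgt keyring over the default /var/tmp/spdk.sock RPC socket, and hostrpc loads the same file into the host-side spdk_tgt that was started with -r /var/tmp/host.sock and acts as the SPDK initiator. Stripped of the test wrappers, the pair of calls for key0 is roughly:

    # target side (nvmf_tgt, default RPC socket /var/tmp/spdk.sock)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        keyring_file_add_key key0 /tmp/spdk.key-null.Myl
    # host side (spdk_tgt started with -r /var/tmp/host.sock)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        keyring_file_add_key key0 /tmp/spdk.key-null.Myl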
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.153 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.153 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.153 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Gpx 00:12:20.153 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Gpx 00:12:20.412 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:20.412 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:20.412 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:20.412 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.412 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:20.412 19:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:20.671 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:20.671 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.671 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:20.671 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:20.671 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:20.671 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.671 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.671 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.671 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.671 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.671 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.671 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.671 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.930 00:12:21.189 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.189 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.189 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.449 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.449 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.449 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.449 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.449 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.449 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.449 { 00:12:21.449 "cntlid": 1, 00:12:21.449 "qid": 0, 00:12:21.449 "state": "enabled", 00:12:21.449 "thread": "nvmf_tgt_poll_group_000", 00:12:21.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:21.449 "listen_address": { 00:12:21.449 "trtype": "TCP", 00:12:21.449 "adrfam": "IPv4", 00:12:21.449 "traddr": "10.0.0.3", 00:12:21.449 "trsvcid": "4420" 00:12:21.449 }, 00:12:21.449 "peer_address": { 00:12:21.449 "trtype": "TCP", 00:12:21.449 "adrfam": "IPv4", 00:12:21.449 "traddr": "10.0.0.1", 00:12:21.449 "trsvcid": "42386" 00:12:21.449 }, 00:12:21.449 "auth": { 00:12:21.449 "state": "completed", 00:12:21.449 "digest": "sha256", 00:12:21.449 "dhgroup": "null" 00:12:21.449 } 00:12:21.449 } 00:12:21.449 ]' 00:12:21.449 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.449 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:21.449 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.449 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:21.449 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.449 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.449 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.449 19:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.708 19:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:12:21.709 19:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.029 19:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.029 19:32:19 
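[editor's note] The first keyid has now completed a full verification round, and the same sequence repeats for keys 1-3: pin the host to one digest/dhgroup pair, allow the host NQN on the subsystem with that key pair, attach an authenticated controller through the SPDK initiator, check the qpair's auth block via nvmf_subsystem_get_qpairs (state "completed" with the expected digest and dhgroup), detach, then redo the handshake from the kernel initiator with the raw DHHC-1 secrets. Condensed to the underlying commands for key0, with values copied from the trace above (key0_secret/ckey0_secret stand for the full DHHC-1 strings printed earlier):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }     # host-side spdk_tgt
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1

    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # completed
    hostrpc bdev_nvme_detach_controller nvme0

    # kernel initiator path with the raw secrets ($key0_secret/$ckey0_secret as above)
    nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 \
        --dhchap-secret "$key0_secret" --dhchap-ctrl-secret "$ckey0_secret"
    nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"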
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.029 00:12:27.029 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:27.029 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.029 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.029 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.029 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.029 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.029 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.029 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.029 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.029 { 00:12:27.029 "cntlid": 3, 00:12:27.029 "qid": 0, 00:12:27.029 "state": "enabled", 00:12:27.029 "thread": "nvmf_tgt_poll_group_000", 00:12:27.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:27.029 "listen_address": { 00:12:27.029 "trtype": "TCP", 00:12:27.029 "adrfam": "IPv4", 00:12:27.029 "traddr": "10.0.0.3", 00:12:27.029 "trsvcid": "4420" 00:12:27.029 }, 00:12:27.029 "peer_address": { 00:12:27.029 "trtype": "TCP", 00:12:27.029 "adrfam": "IPv4", 00:12:27.029 "traddr": "10.0.0.1", 00:12:27.029 "trsvcid": "47106" 00:12:27.029 }, 00:12:27.029 "auth": { 00:12:27.029 "state": "completed", 00:12:27.029 "digest": "sha256", 00:12:27.029 "dhgroup": "null" 00:12:27.029 } 00:12:27.029 } 00:12:27.029 ]' 00:12:27.029 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.029 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:27.029 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.288 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:27.288 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.288 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.288 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.289 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.547 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret 
DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:12:27.547 19:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.483 19:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.051 00:12:29.051 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.051 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.051 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.310 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.310 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.310 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.310 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.310 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.310 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.310 { 00:12:29.310 "cntlid": 5, 00:12:29.310 "qid": 0, 00:12:29.310 "state": "enabled", 00:12:29.310 "thread": "nvmf_tgt_poll_group_000", 00:12:29.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:29.310 "listen_address": { 00:12:29.310 "trtype": "TCP", 00:12:29.310 "adrfam": "IPv4", 00:12:29.310 "traddr": "10.0.0.3", 00:12:29.310 "trsvcid": "4420" 00:12:29.310 }, 00:12:29.310 "peer_address": { 00:12:29.310 "trtype": "TCP", 00:12:29.310 "adrfam": "IPv4", 00:12:29.310 "traddr": "10.0.0.1", 00:12:29.310 "trsvcid": "47130" 00:12:29.310 }, 00:12:29.310 "auth": { 00:12:29.310 "state": "completed", 00:12:29.310 "digest": "sha256", 00:12:29.310 "dhgroup": "null" 00:12:29.310 } 00:12:29.310 } 00:12:29.310 ]' 00:12:29.310 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.310 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:29.310 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.310 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:29.310 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.310 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.310 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.310 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.569 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:12:29.569 19:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:12:30.136 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.395 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:30.395 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.395 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.395 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.395 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.395 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:30.395 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:30.654 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:12:30.654 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.654 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:30.654 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:30.654 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:30.654 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.654 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:12:30.654 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.654 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.654 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.654 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:30.654 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.654 19:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.912 00:12:30.912 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.912 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.912 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.169 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.169 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.169 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.169 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.169 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.169 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.169 { 00:12:31.169 "cntlid": 7, 00:12:31.169 "qid": 0, 00:12:31.169 "state": "enabled", 00:12:31.169 "thread": "nvmf_tgt_poll_group_000", 00:12:31.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:31.169 "listen_address": { 00:12:31.169 "trtype": "TCP", 00:12:31.169 "adrfam": "IPv4", 00:12:31.169 "traddr": "10.0.0.3", 00:12:31.169 "trsvcid": "4420" 00:12:31.169 }, 00:12:31.169 "peer_address": { 00:12:31.169 "trtype": "TCP", 00:12:31.169 "adrfam": "IPv4", 00:12:31.169 "traddr": "10.0.0.1", 00:12:31.169 "trsvcid": "47178" 00:12:31.169 }, 00:12:31.169 "auth": { 00:12:31.169 "state": "completed", 00:12:31.169 "digest": "sha256", 00:12:31.169 "dhgroup": "null" 00:12:31.170 } 00:12:31.170 } 00:12:31.170 ]' 00:12:31.170 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.170 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:31.170 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.170 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:31.427 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.427 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.427 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.427 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.683 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:12:31.683 19:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:12:32.246 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.246 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:32.246 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.246 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.246 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.246 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:32.246 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.246 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:32.246 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:32.502 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:12:32.502 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.502 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:32.502 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:32.502 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:32.502 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.502 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.502 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.502 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.502 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.502 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.502 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.502 19:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.066 00:12:33.066 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.066 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.066 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.325 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.325 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.325 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.325 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.325 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.325 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.325 { 00:12:33.325 "cntlid": 9, 00:12:33.325 "qid": 0, 00:12:33.325 "state": "enabled", 00:12:33.325 "thread": "nvmf_tgt_poll_group_000", 00:12:33.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:33.325 "listen_address": { 00:12:33.325 "trtype": "TCP", 00:12:33.325 "adrfam": "IPv4", 00:12:33.325 "traddr": "10.0.0.3", 00:12:33.325 "trsvcid": "4420" 00:12:33.325 }, 00:12:33.325 "peer_address": { 00:12:33.325 "trtype": "TCP", 00:12:33.325 "adrfam": "IPv4", 00:12:33.325 "traddr": "10.0.0.1", 00:12:33.325 "trsvcid": "47216" 00:12:33.325 }, 00:12:33.325 "auth": { 00:12:33.325 "state": "completed", 00:12:33.325 "digest": "sha256", 00:12:33.325 "dhgroup": "ffdhe2048" 00:12:33.325 } 00:12:33.325 } 00:12:33.325 ]' 00:12:33.325 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.325 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:33.325 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.325 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:33.325 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.325 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.325 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.325 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.582 
19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:12:33.582 19:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:12:34.516 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.516 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:34.516 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.516 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.516 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.516 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.516 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:34.516 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:34.775 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:34.775 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.775 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:34.775 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:34.775 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:34.775 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.775 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.775 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.775 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.775 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.775 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.775 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.775 19:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.034 00:12:35.034 19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.034 19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.034 19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.293 19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.293 19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.293 19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.293 19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.293 19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.293 19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.293 { 00:12:35.293 "cntlid": 11, 00:12:35.293 "qid": 0, 00:12:35.293 "state": "enabled", 00:12:35.293 "thread": "nvmf_tgt_poll_group_000", 00:12:35.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:35.293 "listen_address": { 00:12:35.293 "trtype": "TCP", 00:12:35.293 "adrfam": "IPv4", 00:12:35.293 "traddr": "10.0.0.3", 00:12:35.293 "trsvcid": "4420" 00:12:35.293 }, 00:12:35.293 "peer_address": { 00:12:35.293 "trtype": "TCP", 00:12:35.293 "adrfam": "IPv4", 00:12:35.293 "traddr": "10.0.0.1", 00:12:35.293 "trsvcid": "49922" 00:12:35.293 }, 00:12:35.294 "auth": { 00:12:35.294 "state": "completed", 00:12:35.294 "digest": "sha256", 00:12:35.294 "dhgroup": "ffdhe2048" 00:12:35.294 } 00:12:35.294 } 00:12:35.294 ]' 00:12:35.294 19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.553 19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:35.553 19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.553 19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:35.553 19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.553 19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.553 19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.553 
19:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.836 19:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:12:35.836 19:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:12:36.403 19:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.403 19:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:36.403 19:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.403 19:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.403 19:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.403 19:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.403 19:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:36.403 19:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:36.969 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:12:36.969 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.969 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:36.969 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:36.969 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:36.969 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.969 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.969 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.969 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.969 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:36.969 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.969 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.969 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.227 00:12:37.227 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.227 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.227 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.486 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.486 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.486 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.486 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.486 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.486 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.486 { 00:12:37.486 "cntlid": 13, 00:12:37.486 "qid": 0, 00:12:37.486 "state": "enabled", 00:12:37.486 "thread": "nvmf_tgt_poll_group_000", 00:12:37.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:37.486 "listen_address": { 00:12:37.486 "trtype": "TCP", 00:12:37.486 "adrfam": "IPv4", 00:12:37.486 "traddr": "10.0.0.3", 00:12:37.486 "trsvcid": "4420" 00:12:37.486 }, 00:12:37.486 "peer_address": { 00:12:37.486 "trtype": "TCP", 00:12:37.486 "adrfam": "IPv4", 00:12:37.486 "traddr": "10.0.0.1", 00:12:37.486 "trsvcid": "49960" 00:12:37.486 }, 00:12:37.486 "auth": { 00:12:37.486 "state": "completed", 00:12:37.486 "digest": "sha256", 00:12:37.486 "dhgroup": "ffdhe2048" 00:12:37.486 } 00:12:37.486 } 00:12:37.486 ]' 00:12:37.486 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.486 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:37.486 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.486 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:37.486 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.746 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.746 19:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.746 19:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.005 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:12:38.005 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:12:38.610 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.610 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:38.610 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.610 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.610 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.610 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.610 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:38.610 19:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:38.869 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:38.869 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.869 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:38.869 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:38.869 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:38.869 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.869 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:12:38.869 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.869 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:38.869 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.869 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:38.869 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:38.869 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:39.128 00:12:39.128 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.128 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.128 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.387 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.387 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.387 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.387 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.645 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.645 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.645 { 00:12:39.645 "cntlid": 15, 00:12:39.645 "qid": 0, 00:12:39.645 "state": "enabled", 00:12:39.645 "thread": "nvmf_tgt_poll_group_000", 00:12:39.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:39.645 "listen_address": { 00:12:39.645 "trtype": "TCP", 00:12:39.645 "adrfam": "IPv4", 00:12:39.645 "traddr": "10.0.0.3", 00:12:39.645 "trsvcid": "4420" 00:12:39.645 }, 00:12:39.645 "peer_address": { 00:12:39.645 "trtype": "TCP", 00:12:39.645 "adrfam": "IPv4", 00:12:39.645 "traddr": "10.0.0.1", 00:12:39.645 "trsvcid": "49984" 00:12:39.645 }, 00:12:39.645 "auth": { 00:12:39.645 "state": "completed", 00:12:39.645 "digest": "sha256", 00:12:39.645 "dhgroup": "ffdhe2048" 00:12:39.645 } 00:12:39.645 } 00:12:39.645 ]' 00:12:39.645 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.645 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:39.645 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:39.645 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:39.645 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:39.645 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.645 
19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.645 19:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.904 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:12:39.904 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:12:40.470 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.471 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:40.471 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.471 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.471 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.471 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:40.471 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.471 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:40.471 19:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:40.729 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:12:40.729 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.729 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:40.729 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:40.729 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:40.729 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.729 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.729 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.729 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.988 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.988 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.988 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.988 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.246 00:12:41.246 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.246 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.246 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.519 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.519 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.519 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.519 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.519 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.519 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.519 { 00:12:41.519 "cntlid": 17, 00:12:41.519 "qid": 0, 00:12:41.519 "state": "enabled", 00:12:41.519 "thread": "nvmf_tgt_poll_group_000", 00:12:41.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:41.519 "listen_address": { 00:12:41.519 "trtype": "TCP", 00:12:41.519 "adrfam": "IPv4", 00:12:41.519 "traddr": "10.0.0.3", 00:12:41.519 "trsvcid": "4420" 00:12:41.519 }, 00:12:41.519 "peer_address": { 00:12:41.519 "trtype": "TCP", 00:12:41.519 "adrfam": "IPv4", 00:12:41.519 "traddr": "10.0.0.1", 00:12:41.519 "trsvcid": "50014" 00:12:41.519 }, 00:12:41.519 "auth": { 00:12:41.519 "state": "completed", 00:12:41.519 "digest": "sha256", 00:12:41.519 "dhgroup": "ffdhe3072" 00:12:41.519 } 00:12:41.519 } 00:12:41.519 ]' 00:12:41.519 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:41.519 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:41.519 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.519 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:41.519 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.777 19:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.777 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.777 19:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.035 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:12:42.035 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:12:42.601 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.601 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:42.601 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.601 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.601 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.601 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.601 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:42.601 19:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:42.859 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:12:42.859 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.859 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:42.859 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:42.859 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:42.859 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.859 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:12:42.859 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.859 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.859 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.860 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.860 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.860 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.425 00:12:43.425 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.425 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.425 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.684 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.684 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.684 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.684 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.684 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.684 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.684 { 00:12:43.684 "cntlid": 19, 00:12:43.684 "qid": 0, 00:12:43.684 "state": "enabled", 00:12:43.684 "thread": "nvmf_tgt_poll_group_000", 00:12:43.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:43.684 "listen_address": { 00:12:43.684 "trtype": "TCP", 00:12:43.684 "adrfam": "IPv4", 00:12:43.684 "traddr": "10.0.0.3", 00:12:43.684 "trsvcid": "4420" 00:12:43.684 }, 00:12:43.684 "peer_address": { 00:12:43.684 "trtype": "TCP", 00:12:43.684 "adrfam": "IPv4", 00:12:43.684 "traddr": "10.0.0.1", 00:12:43.684 "trsvcid": "50040" 00:12:43.684 }, 00:12:43.684 "auth": { 00:12:43.684 "state": "completed", 00:12:43.684 "digest": "sha256", 00:12:43.684 "dhgroup": "ffdhe3072" 00:12:43.684 } 00:12:43.684 } 00:12:43.684 ]' 00:12:43.684 19:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.684 19:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:43.684 19:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.684 19:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:43.684 19:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.943 19:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.943 19:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.943 19:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.201 19:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:12:44.201 19:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:12:44.768 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.768 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:44.768 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.768 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.768 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.768 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.768 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:44.768 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:45.027 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:12:45.027 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.027 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:45.027 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:45.027 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:45.027 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.027 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.027 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.027 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.027 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.027 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.027 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.027 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.285 00:12:45.285 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.285 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.285 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.852 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.852 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.852 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.852 19:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.852 19:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.852 19:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.852 { 00:12:45.852 "cntlid": 21, 00:12:45.852 "qid": 0, 00:12:45.852 "state": "enabled", 00:12:45.852 "thread": "nvmf_tgt_poll_group_000", 00:12:45.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:45.852 "listen_address": { 00:12:45.852 "trtype": "TCP", 00:12:45.852 "adrfam": "IPv4", 00:12:45.852 "traddr": "10.0.0.3", 00:12:45.852 "trsvcid": "4420" 00:12:45.852 }, 00:12:45.852 "peer_address": { 00:12:45.852 "trtype": "TCP", 00:12:45.852 "adrfam": "IPv4", 00:12:45.852 "traddr": "10.0.0.1", 00:12:45.852 "trsvcid": "58964" 00:12:45.852 }, 00:12:45.852 "auth": { 00:12:45.852 "state": "completed", 00:12:45.852 "digest": "sha256", 00:12:45.852 "dhgroup": "ffdhe3072" 00:12:45.852 } 00:12:45.852 } 00:12:45.852 ]' 00:12:45.852 19:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.852 19:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:45.852 19:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.852 19:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:45.852 19:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.852 19:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.852 19:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.852 19:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.112 19:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:12:46.112 19:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:12:47.046 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.046 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:47.046 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.046 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.046 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.046 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.046 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:47.047 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:47.047 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:47.047 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.047 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:47.047 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:47.047 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:47.047 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.047 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:12:47.047 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.047 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.047 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.047 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:47.047 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.047 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.613 00:12:47.613 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.613 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.613 19:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.872 19:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.872 19:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.872 19:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.872 19:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.872 19:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.872 19:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.872 { 00:12:47.872 "cntlid": 23, 00:12:47.872 "qid": 0, 00:12:47.872 "state": "enabled", 00:12:47.872 "thread": "nvmf_tgt_poll_group_000", 00:12:47.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:47.872 "listen_address": { 00:12:47.872 "trtype": "TCP", 00:12:47.872 "adrfam": "IPv4", 00:12:47.872 "traddr": "10.0.0.3", 00:12:47.872 "trsvcid": "4420" 00:12:47.872 }, 00:12:47.872 "peer_address": { 00:12:47.872 "trtype": "TCP", 00:12:47.872 "adrfam": "IPv4", 00:12:47.872 "traddr": "10.0.0.1", 00:12:47.872 "trsvcid": "59002" 00:12:47.872 }, 00:12:47.872 "auth": { 00:12:47.872 "state": "completed", 00:12:47.872 "digest": "sha256", 00:12:47.872 "dhgroup": "ffdhe3072" 00:12:47.872 } 00:12:47.872 } 00:12:47.872 ]' 00:12:47.872 19:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.872 19:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:12:47.872 19:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.872 19:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:47.872 19:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.872 19:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.872 19:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.872 19:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.131 19:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:12:48.131 19:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:12:49.119 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.119 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:49.119 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.119 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.119 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.119 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:49.119 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:49.119 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:49.119 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:49.120 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:49.120 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.120 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:49.120 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:49.120 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:49.120 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.120 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.120 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.120 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.120 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.120 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.120 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.120 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.691 00:12:49.691 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.691 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.691 19:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.691 19:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.691 19:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.691 19:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.691 19:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.949 19:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.949 19:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.949 { 00:12:49.949 "cntlid": 25, 00:12:49.949 "qid": 0, 00:12:49.949 "state": "enabled", 00:12:49.949 "thread": "nvmf_tgt_poll_group_000", 00:12:49.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:49.949 "listen_address": { 00:12:49.949 "trtype": "TCP", 00:12:49.949 "adrfam": "IPv4", 00:12:49.949 "traddr": "10.0.0.3", 00:12:49.949 "trsvcid": "4420" 00:12:49.949 }, 00:12:49.949 "peer_address": { 00:12:49.949 "trtype": "TCP", 00:12:49.949 "adrfam": "IPv4", 00:12:49.949 "traddr": "10.0.0.1", 00:12:49.949 "trsvcid": "59024" 00:12:49.949 }, 00:12:49.949 "auth": { 00:12:49.949 "state": "completed", 00:12:49.949 "digest": "sha256", 00:12:49.949 "dhgroup": "ffdhe4096" 00:12:49.949 } 00:12:49.949 } 00:12:49.949 ]' 00:12:49.949 19:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:12:49.949 19:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:49.949 19:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.949 19:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:49.949 19:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.949 19:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.949 19:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.949 19:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.207 19:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:12:50.207 19:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:12:51.141 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.141 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:51.141 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.141 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.141 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.141 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:51.141 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:51.141 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:51.399 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:51.399 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:51.399 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:51.399 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:51.399 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:51.399 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.399 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.399 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.399 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.399 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.399 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.399 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.399 19:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.656 00:12:51.656 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.656 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.656 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.913 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.913 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.913 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.913 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.913 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.913 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.913 { 00:12:51.913 "cntlid": 27, 00:12:51.913 "qid": 0, 00:12:51.913 "state": "enabled", 00:12:51.913 "thread": "nvmf_tgt_poll_group_000", 00:12:51.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:51.913 "listen_address": { 00:12:51.913 "trtype": "TCP", 00:12:51.913 "adrfam": "IPv4", 00:12:51.913 "traddr": "10.0.0.3", 00:12:51.913 "trsvcid": "4420" 00:12:51.913 }, 00:12:51.913 "peer_address": { 00:12:51.913 "trtype": "TCP", 00:12:51.913 "adrfam": "IPv4", 00:12:51.913 "traddr": "10.0.0.1", 00:12:51.913 "trsvcid": "59042" 00:12:51.913 }, 00:12:51.913 "auth": { 00:12:51.913 "state": "completed", 
00:12:51.913 "digest": "sha256", 00:12:51.913 "dhgroup": "ffdhe4096" 00:12:51.913 } 00:12:51.913 } 00:12:51.913 ]' 00:12:51.913 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:52.169 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:52.169 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:52.169 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:52.169 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:52.169 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.169 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.169 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.426 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:12:52.426 19:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.358 19:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.358 19:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.924 00:12:53.924 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.924 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.924 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.184 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.184 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.184 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.184 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.184 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.184 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.184 { 00:12:54.184 "cntlid": 29, 00:12:54.184 "qid": 0, 00:12:54.184 "state": "enabled", 00:12:54.184 "thread": "nvmf_tgt_poll_group_000", 00:12:54.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:54.184 "listen_address": { 00:12:54.184 "trtype": "TCP", 00:12:54.184 "adrfam": "IPv4", 00:12:54.184 "traddr": "10.0.0.3", 00:12:54.184 "trsvcid": "4420" 00:12:54.184 }, 00:12:54.184 "peer_address": { 00:12:54.184 "trtype": "TCP", 00:12:54.184 "adrfam": 
"IPv4", 00:12:54.184 "traddr": "10.0.0.1", 00:12:54.184 "trsvcid": "59068" 00:12:54.184 }, 00:12:54.184 "auth": { 00:12:54.184 "state": "completed", 00:12:54.184 "digest": "sha256", 00:12:54.184 "dhgroup": "ffdhe4096" 00:12:54.184 } 00:12:54.184 } 00:12:54.184 ]' 00:12:54.184 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.184 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:54.184 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.184 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:54.184 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.453 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.453 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.453 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.712 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:12:54.712 19:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:12:55.281 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.281 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:55.281 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.281 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.281 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.281 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.281 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:55.281 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:55.540 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:12:55.540 19:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.540 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:55.540 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:55.540 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:55.540 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.540 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:12:55.540 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.540 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.540 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.540 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:55.540 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:55.540 19:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:56.106 00:12:56.106 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.106 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.106 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.364 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.364 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.364 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.364 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.364 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.364 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.364 { 00:12:56.364 "cntlid": 31, 00:12:56.364 "qid": 0, 00:12:56.364 "state": "enabled", 00:12:56.364 "thread": "nvmf_tgt_poll_group_000", 00:12:56.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:56.364 "listen_address": { 00:12:56.364 "trtype": "TCP", 00:12:56.364 "adrfam": "IPv4", 00:12:56.364 "traddr": "10.0.0.3", 00:12:56.364 "trsvcid": "4420" 00:12:56.364 }, 00:12:56.364 "peer_address": { 00:12:56.364 "trtype": "TCP", 
00:12:56.364 "adrfam": "IPv4", 00:12:56.364 "traddr": "10.0.0.1", 00:12:56.364 "trsvcid": "52526" 00:12:56.364 }, 00:12:56.364 "auth": { 00:12:56.364 "state": "completed", 00:12:56.364 "digest": "sha256", 00:12:56.364 "dhgroup": "ffdhe4096" 00:12:56.364 } 00:12:56.364 } 00:12:56.364 ]' 00:12:56.364 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.364 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:56.364 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.364 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:56.364 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.364 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.364 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.364 19:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.623 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:12:56.623 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:12:57.558 
19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.558 19:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.124 00:12:58.124 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.124 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.124 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.382 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.382 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.382 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.382 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.382 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.382 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.382 { 00:12:58.382 "cntlid": 33, 00:12:58.382 "qid": 0, 00:12:58.382 "state": "enabled", 00:12:58.382 "thread": "nvmf_tgt_poll_group_000", 00:12:58.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:12:58.382 "listen_address": { 00:12:58.382 "trtype": "TCP", 00:12:58.382 "adrfam": "IPv4", 00:12:58.382 "traddr": 
"10.0.0.3", 00:12:58.382 "trsvcid": "4420" 00:12:58.382 }, 00:12:58.382 "peer_address": { 00:12:58.382 "trtype": "TCP", 00:12:58.382 "adrfam": "IPv4", 00:12:58.382 "traddr": "10.0.0.1", 00:12:58.382 "trsvcid": "52560" 00:12:58.382 }, 00:12:58.382 "auth": { 00:12:58.382 "state": "completed", 00:12:58.382 "digest": "sha256", 00:12:58.382 "dhgroup": "ffdhe6144" 00:12:58.382 } 00:12:58.382 } 00:12:58.382 ]' 00:12:58.382 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.382 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:58.382 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.382 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:58.382 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.640 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.640 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.640 19:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.898 19:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:12:58.898 19:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:12:59.465 19:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.465 19:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:12:59.465 19:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.465 19:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.465 19:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.465 19:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.465 19:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:59.465 19:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:59.724 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:12:59.724 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.724 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:59.724 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:59.724 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:59.724 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.724 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.724 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.724 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.724 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.724 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.724 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.724 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.290 00:13:00.290 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.290 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.290 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.548 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.548 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.548 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.548 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.548 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.548 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.548 { 00:13:00.548 "cntlid": 35, 00:13:00.548 "qid": 0, 00:13:00.548 "state": "enabled", 00:13:00.548 "thread": "nvmf_tgt_poll_group_000", 
00:13:00.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:00.548 "listen_address": { 00:13:00.548 "trtype": "TCP", 00:13:00.548 "adrfam": "IPv4", 00:13:00.548 "traddr": "10.0.0.3", 00:13:00.548 "trsvcid": "4420" 00:13:00.548 }, 00:13:00.548 "peer_address": { 00:13:00.548 "trtype": "TCP", 00:13:00.548 "adrfam": "IPv4", 00:13:00.549 "traddr": "10.0.0.1", 00:13:00.549 "trsvcid": "52598" 00:13:00.549 }, 00:13:00.549 "auth": { 00:13:00.549 "state": "completed", 00:13:00.549 "digest": "sha256", 00:13:00.549 "dhgroup": "ffdhe6144" 00:13:00.549 } 00:13:00.549 } 00:13:00.549 ]' 00:13:00.549 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.549 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:00.549 19:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.808 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:00.808 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.808 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.808 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.808 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.069 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:13:01.069 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:13:01.636 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.636 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:01.636 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.636 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.636 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.636 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.636 19:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:01.636 19:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:01.894 19:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:01.894 19:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.894 19:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:01.894 19:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:01.894 19:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:01.894 19:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.894 19:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.894 19:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.894 19:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.894 19:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.894 19:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.894 19:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.894 19:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:02.461 00:13:02.461 19:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.461 19:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.461 19:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.720 19:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.720 19:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.720 19:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.720 19:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.720 19:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.720 19:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.720 { 
00:13:02.720 "cntlid": 37, 00:13:02.720 "qid": 0, 00:13:02.720 "state": "enabled", 00:13:02.720 "thread": "nvmf_tgt_poll_group_000", 00:13:02.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:02.720 "listen_address": { 00:13:02.720 "trtype": "TCP", 00:13:02.720 "adrfam": "IPv4", 00:13:02.720 "traddr": "10.0.0.3", 00:13:02.720 "trsvcid": "4420" 00:13:02.720 }, 00:13:02.720 "peer_address": { 00:13:02.720 "trtype": "TCP", 00:13:02.720 "adrfam": "IPv4", 00:13:02.720 "traddr": "10.0.0.1", 00:13:02.720 "trsvcid": "52626" 00:13:02.720 }, 00:13:02.720 "auth": { 00:13:02.720 "state": "completed", 00:13:02.720 "digest": "sha256", 00:13:02.720 "dhgroup": "ffdhe6144" 00:13:02.720 } 00:13:02.720 } 00:13:02.720 ]' 00:13:02.720 19:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.720 19:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:02.720 19:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.720 19:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:02.720 19:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.979 19:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.979 19:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.979 19:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.238 19:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:13:03.238 19:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:13:03.806 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.806 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:03.807 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.807 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.807 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.807 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.807 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:03.807 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:04.065 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:13:04.065 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.065 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:04.065 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:04.065 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:04.065 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.065 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:13:04.065 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.065 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.066 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.066 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:04.066 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:04.066 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:04.633 00:13:04.633 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.633 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.633 19:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.633 19:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.633 19:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.633 19:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.633 19:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.893 19:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.893 19:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:13:04.893 { 00:13:04.893 "cntlid": 39, 00:13:04.893 "qid": 0, 00:13:04.893 "state": "enabled", 00:13:04.893 "thread": "nvmf_tgt_poll_group_000", 00:13:04.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:04.893 "listen_address": { 00:13:04.893 "trtype": "TCP", 00:13:04.893 "adrfam": "IPv4", 00:13:04.893 "traddr": "10.0.0.3", 00:13:04.893 "trsvcid": "4420" 00:13:04.893 }, 00:13:04.893 "peer_address": { 00:13:04.893 "trtype": "TCP", 00:13:04.893 "adrfam": "IPv4", 00:13:04.893 "traddr": "10.0.0.1", 00:13:04.893 "trsvcid": "52656" 00:13:04.893 }, 00:13:04.893 "auth": { 00:13:04.893 "state": "completed", 00:13:04.893 "digest": "sha256", 00:13:04.893 "dhgroup": "ffdhe6144" 00:13:04.893 } 00:13:04.893 } 00:13:04.893 ]' 00:13:04.893 19:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.893 19:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:04.893 19:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.893 19:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:04.893 19:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.893 19:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.893 19:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.893 19:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.152 19:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:13:05.152 19:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.086 19:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.693 00:13:06.951 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.951 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.951 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:07.210 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.210 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.210 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.210 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.210 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:07.210 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:07.210 { 00:13:07.210 "cntlid": 41, 00:13:07.210 "qid": 0, 00:13:07.210 "state": "enabled", 00:13:07.210 "thread": "nvmf_tgt_poll_group_000", 00:13:07.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:07.210 "listen_address": { 00:13:07.210 "trtype": "TCP", 00:13:07.210 "adrfam": "IPv4", 00:13:07.210 "traddr": "10.0.0.3", 00:13:07.210 "trsvcid": "4420" 00:13:07.210 }, 00:13:07.210 "peer_address": { 00:13:07.210 "trtype": "TCP", 00:13:07.210 "adrfam": "IPv4", 00:13:07.210 "traddr": "10.0.0.1", 00:13:07.210 "trsvcid": "34610" 00:13:07.210 }, 00:13:07.210 "auth": { 00:13:07.210 "state": "completed", 00:13:07.210 "digest": "sha256", 00:13:07.210 "dhgroup": "ffdhe8192" 00:13:07.210 } 00:13:07.210 } 00:13:07.210 ]' 00:13:07.210 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.210 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:07.210 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.210 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:07.210 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.210 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.210 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.210 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.469 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:13:07.469 19:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:13:08.404 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.404 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:08.404 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.404 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.404 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
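Each pass above follows the same shape: the SPDK host-side bdev_nvme driver is restricted to a single digest/dhgroup pair, the host NQN is registered on the subsystem with its DH-HMAC-CHAP key(s), a controller is attached through the SPDK host, the negotiated auth fields on the resulting qpair are checked with jq, and the kernel initiator then repeats the handshake with nvme-cli before teardown. A condensed replay of one such pass, reconstructed from the xtrace above (rpc.py stands for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path, HOSTNQN/HOSTID for the uuid-based values printed in the log, and the DHHC-1 secrets are elided; the target-side rpc_cmd calls are assumed to go to the target application's own RPC socket, since their expansion is not shown in this excerpt):

    # host side: restrict the bdev_nvme driver to the combination under test
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # target side: register the host NQN with its DH-HMAC-CHAP key(s)
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: authenticated attach, then inspect what was negotiated on the qpair
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # kernel initiator: repeat the handshake with nvme-cli, then tear everything down
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"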
00:13:08.404 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.404 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:08.404 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:08.404 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:08.404 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.404 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:08.404 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:08.404 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:08.405 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.405 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.405 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.405 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.405 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.405 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.405 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.405 19:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.342 00:13:09.342 19:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:09.342 19:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.342 19:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.342 19:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.342 19:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.342 19:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.342 19:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.342 19:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.342 19:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.342 { 00:13:09.342 "cntlid": 43, 00:13:09.342 "qid": 0, 00:13:09.342 "state": "enabled", 00:13:09.342 "thread": "nvmf_tgt_poll_group_000", 00:13:09.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:09.342 "listen_address": { 00:13:09.342 "trtype": "TCP", 00:13:09.342 "adrfam": "IPv4", 00:13:09.342 "traddr": "10.0.0.3", 00:13:09.342 "trsvcid": "4420" 00:13:09.342 }, 00:13:09.342 "peer_address": { 00:13:09.342 "trtype": "TCP", 00:13:09.342 "adrfam": "IPv4", 00:13:09.342 "traddr": "10.0.0.1", 00:13:09.342 "trsvcid": "34644" 00:13:09.342 }, 00:13:09.342 "auth": { 00:13:09.342 "state": "completed", 00:13:09.342 "digest": "sha256", 00:13:09.342 "dhgroup": "ffdhe8192" 00:13:09.342 } 00:13:09.342 } 00:13:09.342 ]' 00:13:09.342 19:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.601 19:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:09.601 19:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.601 19:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:09.601 19:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.601 19:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.601 19:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.601 19:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.860 19:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:13:09.860 19:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:13:10.427 19:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.427 19:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:10.427 19:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.427 19:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
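The connect_authenticate checks in each pass assert three fields of the first qpair returned by the target. A standalone equivalent of the checks visible above, using the same jq paths and expected values as the log (rpc.py again abbreviates the full scripts/rpc.py path):

    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]   # negotiated digest
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]   # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication finished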
00:13:10.427 19:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.427 19:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.427 19:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:10.427 19:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:10.686 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:10.686 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.686 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:10.686 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:10.686 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:10.686 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.686 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.686 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.686 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.686 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.686 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.686 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.686 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.620 00:13:11.620 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.620 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.620 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.620 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.620 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.620 19:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.620 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.620 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.620 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.620 { 00:13:11.620 "cntlid": 45, 00:13:11.620 "qid": 0, 00:13:11.620 "state": "enabled", 00:13:11.620 "thread": "nvmf_tgt_poll_group_000", 00:13:11.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:11.620 "listen_address": { 00:13:11.620 "trtype": "TCP", 00:13:11.620 "adrfam": "IPv4", 00:13:11.620 "traddr": "10.0.0.3", 00:13:11.620 "trsvcid": "4420" 00:13:11.621 }, 00:13:11.621 "peer_address": { 00:13:11.621 "trtype": "TCP", 00:13:11.621 "adrfam": "IPv4", 00:13:11.621 "traddr": "10.0.0.1", 00:13:11.621 "trsvcid": "34652" 00:13:11.621 }, 00:13:11.621 "auth": { 00:13:11.621 "state": "completed", 00:13:11.621 "digest": "sha256", 00:13:11.621 "dhgroup": "ffdhe8192" 00:13:11.621 } 00:13:11.621 } 00:13:11.621 ]' 00:13:11.621 19:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.621 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:11.621 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.079 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:12.079 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.079 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.079 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.079 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.079 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:13:12.079 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:13:12.658 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.658 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:12.658 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
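Worth noticing before the next pass: for key0 through key2 the host is registered with both --dhchap-key and --dhchap-ctrlr-key, and the nvme-cli connects pass both --dhchap-secret and --dhchap-ctrl-secret, so the controller also authenticates back to the host (bidirectional DH-HMAC-CHAP). The key3 passes omit the controller key on every side, exercising the unidirectional case, and their nvme connect lines correspondingly drop --dhchap-ctrl-secret. The two target-side registrations, side by side, as they appear in this trace:

    # bidirectional (key0-key2): the controller must also authenticate to the host
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # unidirectional (key3): no controller key, only the host is challenged
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key3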
00:13:12.658 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.658 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.658 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.658 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:12.658 19:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:12.917 19:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:12.918 19:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.918 19:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:12.918 19:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:12.918 19:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:12.918 19:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.918 19:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:13:12.918 19:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.918 19:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.918 19:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.918 19:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:12.918 19:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:12.918 19:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:13.484 00:13:13.484 19:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.484 19:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.484 19:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.741 19:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.741 19:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.741 
19:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.741 19:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.741 19:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.741 19:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.741 { 00:13:13.741 "cntlid": 47, 00:13:13.741 "qid": 0, 00:13:13.741 "state": "enabled", 00:13:13.741 "thread": "nvmf_tgt_poll_group_000", 00:13:13.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:13.741 "listen_address": { 00:13:13.741 "trtype": "TCP", 00:13:13.741 "adrfam": "IPv4", 00:13:13.741 "traddr": "10.0.0.3", 00:13:13.741 "trsvcid": "4420" 00:13:13.741 }, 00:13:13.741 "peer_address": { 00:13:13.741 "trtype": "TCP", 00:13:13.741 "adrfam": "IPv4", 00:13:13.741 "traddr": "10.0.0.1", 00:13:13.741 "trsvcid": "34674" 00:13:13.741 }, 00:13:13.741 "auth": { 00:13:13.741 "state": "completed", 00:13:13.741 "digest": "sha256", 00:13:13.741 "dhgroup": "ffdhe8192" 00:13:13.741 } 00:13:13.741 } 00:13:13.741 ]' 00:13:13.741 19:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.999 19:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:13.999 19:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.999 19:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:13.999 19:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.999 19:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.999 19:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.999 19:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.258 19:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:13:14.258 19:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:13:14.825 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.825 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:14.825 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.826 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
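Here the outer loop advances from sha256 to sha384 with the null DH group. The xtrace markers at target/auth.sh lines 118-120 ("for digest in", "for dhgroup in", "for keyid in") show that the whole section is a matrix walk over every digest, DH group, and key index. A rough reconstruction of the driver loop, with variable and array names guessed from the trace rather than copied from the script source:

    for digest in "${digests[@]}"; do          # sha256 finishes above, sha384 starts below
      for dhgroup in "${dhgroups[@]}"; do      # null, ffdhe2048, ..., ffdhe8192
        for keyid in "${!keys[@]}"; do         # key0..key3; key3 carries no controller key
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done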
00:13:14.826 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.826 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:14.826 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:14.826 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.826 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:14.826 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:15.084 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:15.085 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.085 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:15.085 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:15.085 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:15.085 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.085 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.085 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.085 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.085 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.085 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.085 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.085 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.344 00:13:15.344 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.344 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.344 19:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.602 19:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.602 19:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.602 19:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.602 19:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.602 19:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.602 19:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:15.602 { 00:13:15.602 "cntlid": 49, 00:13:15.602 "qid": 0, 00:13:15.602 "state": "enabled", 00:13:15.602 "thread": "nvmf_tgt_poll_group_000", 00:13:15.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:15.602 "listen_address": { 00:13:15.602 "trtype": "TCP", 00:13:15.602 "adrfam": "IPv4", 00:13:15.602 "traddr": "10.0.0.3", 00:13:15.602 "trsvcid": "4420" 00:13:15.602 }, 00:13:15.602 "peer_address": { 00:13:15.602 "trtype": "TCP", 00:13:15.602 "adrfam": "IPv4", 00:13:15.602 "traddr": "10.0.0.1", 00:13:15.602 "trsvcid": "35212" 00:13:15.602 }, 00:13:15.602 "auth": { 00:13:15.602 "state": "completed", 00:13:15.602 "digest": "sha384", 00:13:15.602 "dhgroup": "null" 00:13:15.602 } 00:13:15.602 } 00:13:15.602 ]' 00:13:15.860 19:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.860 19:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:15.860 19:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.860 19:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:15.860 19:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.860 19:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.860 19:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.860 19:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.119 19:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:13:16.119 19:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:13:16.686 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.686 19:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:16.686 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.686 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.686 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.686 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.686 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:16.686 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:16.946 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:16.946 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.946 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:16.946 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:16.946 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:16.946 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.946 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.946 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.946 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.946 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.946 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.946 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.946 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.206 00:13:17.206 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.206 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.206 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.775 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.775 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.775 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.775 19:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.775 19:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.775 19:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.775 { 00:13:17.775 "cntlid": 51, 00:13:17.775 "qid": 0, 00:13:17.775 "state": "enabled", 00:13:17.775 "thread": "nvmf_tgt_poll_group_000", 00:13:17.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:17.775 "listen_address": { 00:13:17.775 "trtype": "TCP", 00:13:17.775 "adrfam": "IPv4", 00:13:17.775 "traddr": "10.0.0.3", 00:13:17.775 "trsvcid": "4420" 00:13:17.775 }, 00:13:17.775 "peer_address": { 00:13:17.775 "trtype": "TCP", 00:13:17.775 "adrfam": "IPv4", 00:13:17.775 "traddr": "10.0.0.1", 00:13:17.775 "trsvcid": "35260" 00:13:17.775 }, 00:13:17.775 "auth": { 00:13:17.775 "state": "completed", 00:13:17.775 "digest": "sha384", 00:13:17.775 "dhgroup": "null" 00:13:17.775 } 00:13:17.775 } 00:13:17.775 ]' 00:13:17.775 19:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.775 19:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:17.775 19:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.775 19:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:17.775 19:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.775 19:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.775 19:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.775 19:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.034 19:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:13:18.034 19:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:13:18.602 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.602 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.602 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:18.602 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.602 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.602 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.602 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.602 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:18.602 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:19.170 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:19.170 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.170 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:19.170 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:19.170 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:19.170 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.171 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.171 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.171 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.171 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.171 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.171 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.171 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.429 00:13:19.429 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.429 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:13:19.429 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.688 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.688 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.688 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.688 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.688 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.688 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.688 { 00:13:19.688 "cntlid": 53, 00:13:19.688 "qid": 0, 00:13:19.688 "state": "enabled", 00:13:19.688 "thread": "nvmf_tgt_poll_group_000", 00:13:19.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:19.688 "listen_address": { 00:13:19.688 "trtype": "TCP", 00:13:19.688 "adrfam": "IPv4", 00:13:19.688 "traddr": "10.0.0.3", 00:13:19.688 "trsvcid": "4420" 00:13:19.688 }, 00:13:19.688 "peer_address": { 00:13:19.688 "trtype": "TCP", 00:13:19.688 "adrfam": "IPv4", 00:13:19.688 "traddr": "10.0.0.1", 00:13:19.688 "trsvcid": "35288" 00:13:19.688 }, 00:13:19.688 "auth": { 00:13:19.688 "state": "completed", 00:13:19.688 "digest": "sha384", 00:13:19.688 "dhgroup": "null" 00:13:19.688 } 00:13:19.688 } 00:13:19.688 ]' 00:13:19.688 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.688 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:19.688 19:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.688 19:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:19.688 19:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.688 19:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.688 19:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.688 19:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.257 19:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:13:20.257 19:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:13:20.825 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.826 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:20.826 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.826 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.826 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.826 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.826 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:20.826 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:21.119 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:13:21.119 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:21.119 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:21.119 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:21.119 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:21.119 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.119 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:13:21.119 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.119 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.119 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.119 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:21.119 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:21.119 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:21.386 00:13:21.386 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.386 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.386 19:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.646 19:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.646 19:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.646 19:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.646 19:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.646 19:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.646 19:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.646 { 00:13:21.646 "cntlid": 55, 00:13:21.646 "qid": 0, 00:13:21.646 "state": "enabled", 00:13:21.646 "thread": "nvmf_tgt_poll_group_000", 00:13:21.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:21.646 "listen_address": { 00:13:21.646 "trtype": "TCP", 00:13:21.646 "adrfam": "IPv4", 00:13:21.646 "traddr": "10.0.0.3", 00:13:21.646 "trsvcid": "4420" 00:13:21.646 }, 00:13:21.646 "peer_address": { 00:13:21.646 "trtype": "TCP", 00:13:21.646 "adrfam": "IPv4", 00:13:21.646 "traddr": "10.0.0.1", 00:13:21.646 "trsvcid": "35300" 00:13:21.646 }, 00:13:21.646 "auth": { 00:13:21.646 "state": "completed", 00:13:21.646 "digest": "sha384", 00:13:21.646 "dhgroup": "null" 00:13:21.646 } 00:13:21.646 } 00:13:21.646 ]' 00:13:21.646 19:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.646 19:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:21.646 19:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.904 19:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:21.904 19:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.904 19:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.904 19:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.904 19:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.162 19:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:13:22.162 19:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:13:22.729 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:13:22.729 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:22.729 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.729 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.729 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.729 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:22.729 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.729 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:22.729 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:22.988 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:13:22.988 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.988 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:22.988 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:22.988 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:22.988 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.988 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.988 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.988 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.988 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.988 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.988 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.988 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.556 00:13:23.556 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
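For reference, a minimal sketch of the single-key flow that the trace above repeats for every digest/DH-group combination. Every command below appears verbatim in the trace; hostrpc and rpc_cmd are the test script's wrappers around rpc.py for the host socket (-s /var/tmp/host.sock) and the target socket, the 10.0.0.3:4420 listener and nqn.2024-03.io.spdk:cnode0 subsystem are the ones exercised above, $hostnqn / $hostid stand for the UUID-based host NQN and host ID used throughout, and the DHHC-1 secrets are placeholders for the generated keys.

  # host side: restrict the initiator to one digest/dhgroup combination
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  # target side: allow the host to authenticate with key0 (ckey0 enables bidirectional auth)
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # attach through the host bdev layer and verify the qpair authenticated as expected
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'                                      # expect nvme0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect "completed"
  hostrpc bdev_nvme_detach_controller nvme0
  # repeat the connection with nvme-cli using the raw secrets, then tear down
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 --dhchap-secret "DHHC-1:00:<key0>" --dhchap-ctrl-secret "DHHC-1:03:<ckey0>"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
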
00:13:23.556 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.556 19:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.815 19:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.815 19:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.815 19:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.815 19:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.815 19:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.815 19:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.815 { 00:13:23.815 "cntlid": 57, 00:13:23.815 "qid": 0, 00:13:23.815 "state": "enabled", 00:13:23.815 "thread": "nvmf_tgt_poll_group_000", 00:13:23.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:23.815 "listen_address": { 00:13:23.815 "trtype": "TCP", 00:13:23.815 "adrfam": "IPv4", 00:13:23.815 "traddr": "10.0.0.3", 00:13:23.815 "trsvcid": "4420" 00:13:23.815 }, 00:13:23.815 "peer_address": { 00:13:23.815 "trtype": "TCP", 00:13:23.815 "adrfam": "IPv4", 00:13:23.815 "traddr": "10.0.0.1", 00:13:23.815 "trsvcid": "35320" 00:13:23.815 }, 00:13:23.815 "auth": { 00:13:23.815 "state": "completed", 00:13:23.815 "digest": "sha384", 00:13:23.815 "dhgroup": "ffdhe2048" 00:13:23.815 } 00:13:23.815 } 00:13:23.815 ]' 00:13:23.815 19:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.815 19:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:23.815 19:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.815 19:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:23.815 19:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.815 19:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.816 19:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.816 19:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.074 19:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:13:24.074 19:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: 
--dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:13:25.011 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.011 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:25.011 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.011 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.011 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.011 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.012 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:25.012 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:25.012 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:13:25.012 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.012 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:25.012 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:25.012 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:25.012 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.012 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.012 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.012 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.012 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.012 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.012 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.012 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.580 00:13:25.580 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.580 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.580 19:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.839 19:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.839 19:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.839 19:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.839 19:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.839 19:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.839 19:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.839 { 00:13:25.839 "cntlid": 59, 00:13:25.839 "qid": 0, 00:13:25.839 "state": "enabled", 00:13:25.839 "thread": "nvmf_tgt_poll_group_000", 00:13:25.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:25.839 "listen_address": { 00:13:25.839 "trtype": "TCP", 00:13:25.839 "adrfam": "IPv4", 00:13:25.839 "traddr": "10.0.0.3", 00:13:25.839 "trsvcid": "4420" 00:13:25.839 }, 00:13:25.839 "peer_address": { 00:13:25.839 "trtype": "TCP", 00:13:25.839 "adrfam": "IPv4", 00:13:25.839 "traddr": "10.0.0.1", 00:13:25.839 "trsvcid": "59492" 00:13:25.839 }, 00:13:25.839 "auth": { 00:13:25.839 "state": "completed", 00:13:25.839 "digest": "sha384", 00:13:25.839 "dhgroup": "ffdhe2048" 00:13:25.839 } 00:13:25.839 } 00:13:25.839 ]' 00:13:25.839 19:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.839 19:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:25.839 19:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.839 19:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:25.839 19:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.839 19:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.839 19:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.839 19:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.098 19:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:13:26.098 19:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:13:27.032 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.032 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:27.032 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.032 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.032 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.032 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.032 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:27.032 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:27.290 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:13:27.290 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.290 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:27.290 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:27.290 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:27.290 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.290 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.290 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.290 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.290 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.290 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.290 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.290 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.548 00:13:27.548 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.548 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.548 19:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.807 19:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.807 19:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.807 19:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.807 19:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.807 19:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.807 19:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.807 { 00:13:27.807 "cntlid": 61, 00:13:27.807 "qid": 0, 00:13:27.807 "state": "enabled", 00:13:27.807 "thread": "nvmf_tgt_poll_group_000", 00:13:27.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:27.807 "listen_address": { 00:13:27.807 "trtype": "TCP", 00:13:27.807 "adrfam": "IPv4", 00:13:27.807 "traddr": "10.0.0.3", 00:13:27.807 "trsvcid": "4420" 00:13:27.807 }, 00:13:27.807 "peer_address": { 00:13:27.807 "trtype": "TCP", 00:13:27.807 "adrfam": "IPv4", 00:13:27.807 "traddr": "10.0.0.1", 00:13:27.807 "trsvcid": "59532" 00:13:27.807 }, 00:13:27.807 "auth": { 00:13:27.807 "state": "completed", 00:13:27.807 "digest": "sha384", 00:13:27.807 "dhgroup": "ffdhe2048" 00:13:27.807 } 00:13:27.807 } 00:13:27.807 ]' 00:13:27.807 19:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.066 19:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:28.066 19:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.066 19:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:28.066 19:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.066 19:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.066 19:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.066 19:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.325 19:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:13:28.325 19:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:13:28.892 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.892 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:28.892 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.892 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.892 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.892 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.892 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:28.892 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:29.151 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:13:29.151 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.151 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:29.151 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:29.151 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:29.151 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.151 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:13:29.151 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.151 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.151 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.151 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:29.151 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:29.151 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:29.410 00:13:29.410 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.411 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.411 19:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.669 19:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.669 19:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.669 19:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.670 19:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.670 19:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.670 19:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.670 { 00:13:29.670 "cntlid": 63, 00:13:29.670 "qid": 0, 00:13:29.670 "state": "enabled", 00:13:29.670 "thread": "nvmf_tgt_poll_group_000", 00:13:29.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:29.670 "listen_address": { 00:13:29.670 "trtype": "TCP", 00:13:29.670 "adrfam": "IPv4", 00:13:29.670 "traddr": "10.0.0.3", 00:13:29.670 "trsvcid": "4420" 00:13:29.670 }, 00:13:29.670 "peer_address": { 00:13:29.670 "trtype": "TCP", 00:13:29.670 "adrfam": "IPv4", 00:13:29.670 "traddr": "10.0.0.1", 00:13:29.670 "trsvcid": "59556" 00:13:29.670 }, 00:13:29.670 "auth": { 00:13:29.670 "state": "completed", 00:13:29.670 "digest": "sha384", 00:13:29.670 "dhgroup": "ffdhe2048" 00:13:29.670 } 00:13:29.670 } 00:13:29.670 ]' 00:13:29.670 19:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.928 19:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:29.928 19:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.928 19:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:29.928 19:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.928 19:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.928 19:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.928 19:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.185 19:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:13:30.185 19:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:13:30.750 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.750 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:30.750 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.750 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.750 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.750 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:30.750 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.750 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:30.750 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:31.008 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:13:31.008 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.008 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:31.008 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:31.008 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:31.008 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.008 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.008 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.008 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.008 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.008 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.008 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:13:31.008 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.266 00:13:31.266 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.266 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.266 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.529 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.529 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.529 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.529 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.797 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.797 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.797 { 00:13:31.797 "cntlid": 65, 00:13:31.797 "qid": 0, 00:13:31.797 "state": "enabled", 00:13:31.797 "thread": "nvmf_tgt_poll_group_000", 00:13:31.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:31.797 "listen_address": { 00:13:31.797 "trtype": "TCP", 00:13:31.797 "adrfam": "IPv4", 00:13:31.797 "traddr": "10.0.0.3", 00:13:31.797 "trsvcid": "4420" 00:13:31.797 }, 00:13:31.797 "peer_address": { 00:13:31.797 "trtype": "TCP", 00:13:31.797 "adrfam": "IPv4", 00:13:31.797 "traddr": "10.0.0.1", 00:13:31.797 "trsvcid": "59592" 00:13:31.797 }, 00:13:31.797 "auth": { 00:13:31.797 "state": "completed", 00:13:31.797 "digest": "sha384", 00:13:31.797 "dhgroup": "ffdhe3072" 00:13:31.797 } 00:13:31.797 } 00:13:31.797 ]' 00:13:31.797 19:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.797 19:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:31.797 19:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.797 19:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:31.797 19:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.797 19:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.797 19:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.797 19:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.054 19:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:13:32.054 19:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.989 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.248 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.248 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.248 19:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.248 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.507 00:13:33.507 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.507 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.507 19:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.766 19:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.766 19:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.766 19:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.766 19:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.766 19:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.766 19:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.766 { 00:13:33.766 "cntlid": 67, 00:13:33.766 "qid": 0, 00:13:33.766 "state": "enabled", 00:13:33.766 "thread": "nvmf_tgt_poll_group_000", 00:13:33.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:33.766 "listen_address": { 00:13:33.766 "trtype": "TCP", 00:13:33.766 "adrfam": "IPv4", 00:13:33.766 "traddr": "10.0.0.3", 00:13:33.766 "trsvcid": "4420" 00:13:33.766 }, 00:13:33.766 "peer_address": { 00:13:33.766 "trtype": "TCP", 00:13:33.766 "adrfam": "IPv4", 00:13:33.766 "traddr": "10.0.0.1", 00:13:33.766 "trsvcid": "59622" 00:13:33.766 }, 00:13:33.766 "auth": { 00:13:33.766 "state": "completed", 00:13:33.766 "digest": "sha384", 00:13:33.766 "dhgroup": "ffdhe3072" 00:13:33.766 } 00:13:33.766 } 00:13:33.766 ]' 00:13:33.766 19:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.766 19:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:33.766 19:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.766 19:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:33.766 19:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.025 19:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.025 19:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.025 19:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.283 19:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:13:34.283 19:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:13:34.851 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.851 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:34.851 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.851 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.851 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.851 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.851 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:34.851 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:35.109 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:13:35.109 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.109 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:35.109 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:35.109 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:35.109 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.109 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.109 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.109 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.109 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.109 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.109 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.109 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.676 00:13:35.676 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.676 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.676 19:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.934 19:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.934 19:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.934 19:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.934 19:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.934 19:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.934 19:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.934 { 00:13:35.934 "cntlid": 69, 00:13:35.934 "qid": 0, 00:13:35.934 "state": "enabled", 00:13:35.934 "thread": "nvmf_tgt_poll_group_000", 00:13:35.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:35.934 "listen_address": { 00:13:35.934 "trtype": "TCP", 00:13:35.934 "adrfam": "IPv4", 00:13:35.934 "traddr": "10.0.0.3", 00:13:35.934 "trsvcid": "4420" 00:13:35.935 }, 00:13:35.935 "peer_address": { 00:13:35.935 "trtype": "TCP", 00:13:35.935 "adrfam": "IPv4", 00:13:35.935 "traddr": "10.0.0.1", 00:13:35.935 "trsvcid": "45676" 00:13:35.935 }, 00:13:35.935 "auth": { 00:13:35.935 "state": "completed", 00:13:35.935 "digest": "sha384", 00:13:35.935 "dhgroup": "ffdhe3072" 00:13:35.935 } 00:13:35.935 } 00:13:35.935 ]' 00:13:35.935 19:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.935 19:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:35.935 19:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.935 19:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:35.935 19:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.935 19:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.935 19:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:35.935 19:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.193 19:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:13:36.193 19:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:13:36.760 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.760 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:36.760 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.760 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.074 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.075 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:37.075 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:37.075 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:37.075 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:13:37.075 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.075 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:37.075 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:37.075 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:37.075 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.075 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:13:37.075 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.075 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.075 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.075 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:37.075 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:37.075 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:37.333 00:13:37.592 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.592 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.592 19:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.852 19:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.852 19:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.852 19:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.852 19:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.852 19:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.852 19:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.852 { 00:13:37.852 "cntlid": 71, 00:13:37.852 "qid": 0, 00:13:37.852 "state": "enabled", 00:13:37.852 "thread": "nvmf_tgt_poll_group_000", 00:13:37.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:37.852 "listen_address": { 00:13:37.852 "trtype": "TCP", 00:13:37.852 "adrfam": "IPv4", 00:13:37.852 "traddr": "10.0.0.3", 00:13:37.852 "trsvcid": "4420" 00:13:37.852 }, 00:13:37.852 "peer_address": { 00:13:37.852 "trtype": "TCP", 00:13:37.852 "adrfam": "IPv4", 00:13:37.852 "traddr": "10.0.0.1", 00:13:37.852 "trsvcid": "45698" 00:13:37.852 }, 00:13:37.852 "auth": { 00:13:37.852 "state": "completed", 00:13:37.852 "digest": "sha384", 00:13:37.852 "dhgroup": "ffdhe3072" 00:13:37.852 } 00:13:37.852 } 00:13:37.852 ]' 00:13:37.852 19:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.852 19:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:37.852 19:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.852 19:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:37.852 19:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.852 19:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.852 19:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.852 19:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.122 19:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:13:38.122 19:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:13:38.689 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.689 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:38.689 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.689 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.689 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.689 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:38.689 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.689 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:38.689 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:38.948 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:13:38.948 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.948 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:38.948 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:38.948 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:38.948 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.948 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.948 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.948 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.948 19:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.948 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.948 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.948 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.516 00:13:39.516 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.516 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.516 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.516 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.516 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.516 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.516 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.516 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.516 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.516 { 00:13:39.516 "cntlid": 73, 00:13:39.516 "qid": 0, 00:13:39.516 "state": "enabled", 00:13:39.516 "thread": "nvmf_tgt_poll_group_000", 00:13:39.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:39.516 "listen_address": { 00:13:39.516 "trtype": "TCP", 00:13:39.516 "adrfam": "IPv4", 00:13:39.516 "traddr": "10.0.0.3", 00:13:39.516 "trsvcid": "4420" 00:13:39.516 }, 00:13:39.516 "peer_address": { 00:13:39.516 "trtype": "TCP", 00:13:39.516 "adrfam": "IPv4", 00:13:39.516 "traddr": "10.0.0.1", 00:13:39.516 "trsvcid": "45736" 00:13:39.516 }, 00:13:39.516 "auth": { 00:13:39.516 "state": "completed", 00:13:39.516 "digest": "sha384", 00:13:39.516 "dhgroup": "ffdhe4096" 00:13:39.517 } 00:13:39.517 } 00:13:39.517 ]' 00:13:39.517 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.776 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:39.776 19:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.776 19:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:39.776 19:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.776 19:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.776 19:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.776 19:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.035 19:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:13:40.035 19:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:13:40.606 19:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.606 19:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:40.606 19:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.606 19:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.606 19:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.606 19:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.606 19:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:40.606 19:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:40.866 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:13:40.866 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.866 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:40.866 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:40.866 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:40.866 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.866 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.866 19:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.866 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.866 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.866 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.866 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.866 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.126 00:13:41.126 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.126 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.126 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.697 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.697 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.697 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.697 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.697 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.697 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.697 { 00:13:41.697 "cntlid": 75, 00:13:41.697 "qid": 0, 00:13:41.697 "state": "enabled", 00:13:41.697 "thread": "nvmf_tgt_poll_group_000", 00:13:41.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:41.697 "listen_address": { 00:13:41.697 "trtype": "TCP", 00:13:41.697 "adrfam": "IPv4", 00:13:41.697 "traddr": "10.0.0.3", 00:13:41.697 "trsvcid": "4420" 00:13:41.697 }, 00:13:41.697 "peer_address": { 00:13:41.697 "trtype": "TCP", 00:13:41.697 "adrfam": "IPv4", 00:13:41.697 "traddr": "10.0.0.1", 00:13:41.697 "trsvcid": "45766" 00:13:41.697 }, 00:13:41.697 "auth": { 00:13:41.697 "state": "completed", 00:13:41.697 "digest": "sha384", 00:13:41.697 "dhgroup": "ffdhe4096" 00:13:41.697 } 00:13:41.697 } 00:13:41.697 ]' 00:13:41.697 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.697 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:41.697 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.697 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:13:41.697 19:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.697 19:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.697 19:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.697 19:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.955 19:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:13:41.955 19:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:13:42.523 19:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.523 19:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:42.523 19:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.523 19:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.523 19:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.523 19:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:42.523 19:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:42.523 19:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:42.787 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:13:42.787 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.787 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:42.787 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:42.787 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:42.787 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.787 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.787 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.787 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.787 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.787 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.787 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.787 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.046 00:13:43.046 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.046 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.046 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.614 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.614 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.614 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.614 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.614 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.614 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.614 { 00:13:43.614 "cntlid": 77, 00:13:43.614 "qid": 0, 00:13:43.614 "state": "enabled", 00:13:43.614 "thread": "nvmf_tgt_poll_group_000", 00:13:43.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:43.614 "listen_address": { 00:13:43.614 "trtype": "TCP", 00:13:43.614 "adrfam": "IPv4", 00:13:43.614 "traddr": "10.0.0.3", 00:13:43.614 "trsvcid": "4420" 00:13:43.614 }, 00:13:43.614 "peer_address": { 00:13:43.614 "trtype": "TCP", 00:13:43.614 "adrfam": "IPv4", 00:13:43.614 "traddr": "10.0.0.1", 00:13:43.614 "trsvcid": "45802" 00:13:43.614 }, 00:13:43.614 "auth": { 00:13:43.614 "state": "completed", 00:13:43.614 "digest": "sha384", 00:13:43.614 "dhgroup": "ffdhe4096" 00:13:43.614 } 00:13:43.614 } 00:13:43.614 ]' 00:13:43.614 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.614 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:43.614 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:13:43.614 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:43.614 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.614 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.614 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.614 19:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.874 19:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:13:43.874 19:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:13:44.442 19:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.442 19:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:44.442 19:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.442 19:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.442 19:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.442 19:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:44.442 19:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:44.442 19:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:44.702 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:13:44.702 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.702 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:44.702 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:44.702 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:44.702 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.702 19:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:13:44.702 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.702 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.961 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.961 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:44.961 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:44.961 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.220 00:13:45.220 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.220 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.220 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.479 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.479 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.479 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.479 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.479 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.479 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.479 { 00:13:45.479 "cntlid": 79, 00:13:45.479 "qid": 0, 00:13:45.479 "state": "enabled", 00:13:45.479 "thread": "nvmf_tgt_poll_group_000", 00:13:45.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:45.479 "listen_address": { 00:13:45.479 "trtype": "TCP", 00:13:45.479 "adrfam": "IPv4", 00:13:45.479 "traddr": "10.0.0.3", 00:13:45.479 "trsvcid": "4420" 00:13:45.479 }, 00:13:45.479 "peer_address": { 00:13:45.479 "trtype": "TCP", 00:13:45.479 "adrfam": "IPv4", 00:13:45.479 "traddr": "10.0.0.1", 00:13:45.479 "trsvcid": "54398" 00:13:45.479 }, 00:13:45.479 "auth": { 00:13:45.479 "state": "completed", 00:13:45.479 "digest": "sha384", 00:13:45.479 "dhgroup": "ffdhe4096" 00:13:45.479 } 00:13:45.479 } 00:13:45.479 ]' 00:13:45.479 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.480 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:45.480 19:33:38 
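(A side note on the ckey assignment seen just above: bash's ${parameter:+word} expansion drops the --dhchap-ctrlr-key argument entirely whenever no companion ckey exists for a key index, which is why the key3 round is added with --dhchap-key key3 alone while key0 through key2 also carry a controller key. A minimal, self-contained illustration follows; the ckeys table here is a stand-in for the test's own key list, not a copy of it.)

  ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)   # index 3 deliberately left unset, mirroring this run
  keyid=3                                 # stands in for the function's third positional parameter ($3)
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]} extra argument(s)"    # prints "0 extra argument(s)"; with keyid=0 it would print 2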
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.739 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:45.739 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.739 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.739 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.739 19:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.998 19:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:13:45.998 19:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:13:46.667 19:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.667 19:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:46.667 19:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.667 19:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.667 19:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.667 19:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:46.667 19:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.667 19:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:46.667 19:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:46.971 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:13:46.971 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.971 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:46.971 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:46.971 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:46.971 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.971 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.971 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.971 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.971 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.971 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.971 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.971 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.539 00:13:47.539 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.539 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.539 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.798 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.798 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.798 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.798 19:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.798 19:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.798 19:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.798 { 00:13:47.798 "cntlid": 81, 00:13:47.798 "qid": 0, 00:13:47.798 "state": "enabled", 00:13:47.798 "thread": "nvmf_tgt_poll_group_000", 00:13:47.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:47.798 "listen_address": { 00:13:47.798 "trtype": "TCP", 00:13:47.798 "adrfam": "IPv4", 00:13:47.798 "traddr": "10.0.0.3", 00:13:47.798 "trsvcid": "4420" 00:13:47.798 }, 00:13:47.798 "peer_address": { 00:13:47.798 "trtype": "TCP", 00:13:47.798 "adrfam": "IPv4", 00:13:47.798 "traddr": "10.0.0.1", 00:13:47.798 "trsvcid": "54426" 00:13:47.798 }, 00:13:47.798 "auth": { 00:13:47.798 "state": "completed", 00:13:47.798 "digest": "sha384", 00:13:47.798 "dhgroup": "ffdhe6144" 00:13:47.798 } 00:13:47.798 } 00:13:47.798 ]' 00:13:47.798 19:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
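(The checks that follow each qpair dump reduce to three jq lookups on the nvmf_subsystem_get_qpairs output; a condensed form of this round's assertions is sketched below, with the qpairs variable name kept from the trace and the here-string style being illustrative only.)

  qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]   # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]   # negotiated DH group for this round
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication finished successfully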
00:13:47.798 19:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:47.798 19:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.798 19:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:47.798 19:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.798 19:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.798 19:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.798 19:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.056 19:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:13:48.056 19:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:13:48.666 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.924 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:48.924 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.924 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.924 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.924 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:48.924 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:48.924 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:49.182 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:13:49.182 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.182 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:49.182 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:13:49.182 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:49.182 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.182 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.182 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.182 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.182 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.182 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.182 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.182 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.749 00:13:49.749 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.749 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.750 19:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.008 19:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.008 19:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.008 19:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.008 19:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.008 19:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.009 19:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:50.009 { 00:13:50.009 "cntlid": 83, 00:13:50.009 "qid": 0, 00:13:50.009 "state": "enabled", 00:13:50.009 "thread": "nvmf_tgt_poll_group_000", 00:13:50.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:50.009 "listen_address": { 00:13:50.009 "trtype": "TCP", 00:13:50.009 "adrfam": "IPv4", 00:13:50.009 "traddr": "10.0.0.3", 00:13:50.009 "trsvcid": "4420" 00:13:50.009 }, 00:13:50.009 "peer_address": { 00:13:50.009 "trtype": "TCP", 00:13:50.009 "adrfam": "IPv4", 00:13:50.009 "traddr": "10.0.0.1", 00:13:50.009 "trsvcid": "54444" 00:13:50.009 }, 00:13:50.009 "auth": { 00:13:50.009 "state": "completed", 00:13:50.009 "digest": "sha384", 
00:13:50.009 "dhgroup": "ffdhe6144" 00:13:50.009 } 00:13:50.009 } 00:13:50.009 ]' 00:13:50.009 19:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:50.009 19:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:50.009 19:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:50.009 19:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:50.009 19:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:50.009 19:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.009 19:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.009 19:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.268 19:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:13:50.268 19:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:13:51.229 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.229 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:51.229 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.229 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.229 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.229 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.229 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:51.229 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:51.487 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:13:51.487 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:51.487 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:13:51.487 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:51.487 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:51.487 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.487 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.487 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.487 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.487 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.487 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.487 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.487 19:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.745 00:13:51.745 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.745 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.745 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:52.002 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.002 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.002 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.002 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.002 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.002 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:52.002 { 00:13:52.002 "cntlid": 85, 00:13:52.002 "qid": 0, 00:13:52.002 "state": "enabled", 00:13:52.002 "thread": "nvmf_tgt_poll_group_000", 00:13:52.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:52.002 "listen_address": { 00:13:52.002 "trtype": "TCP", 00:13:52.002 "adrfam": "IPv4", 00:13:52.002 "traddr": "10.0.0.3", 00:13:52.002 "trsvcid": "4420" 00:13:52.002 }, 00:13:52.002 "peer_address": { 00:13:52.002 "trtype": "TCP", 00:13:52.002 "adrfam": "IPv4", 00:13:52.002 "traddr": "10.0.0.1", 00:13:52.002 "trsvcid": "54488" 
00:13:52.002 }, 00:13:52.002 "auth": { 00:13:52.002 "state": "completed", 00:13:52.002 "digest": "sha384", 00:13:52.002 "dhgroup": "ffdhe6144" 00:13:52.002 } 00:13:52.002 } 00:13:52.002 ]' 00:13:52.002 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:52.002 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:52.002 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.259 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:52.259 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:52.259 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.259 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.259 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.518 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:13:52.518 19:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:13:53.084 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.084 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:53.084 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.084 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.084 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.084 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:53.085 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:53.085 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:53.343 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:13:53.343 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:13:53.343 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:53.343 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:53.343 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:53.343 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.343 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:13:53.343 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.343 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.343 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.343 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:53.343 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:53.343 19:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:53.912 00:13:53.912 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.912 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.912 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.171 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.171 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.171 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.171 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.171 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.171 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:54.171 { 00:13:54.171 "cntlid": 87, 00:13:54.171 "qid": 0, 00:13:54.171 "state": "enabled", 00:13:54.171 "thread": "nvmf_tgt_poll_group_000", 00:13:54.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:54.171 "listen_address": { 00:13:54.171 "trtype": "TCP", 00:13:54.171 "adrfam": "IPv4", 00:13:54.171 "traddr": "10.0.0.3", 00:13:54.171 "trsvcid": "4420" 00:13:54.171 }, 00:13:54.171 "peer_address": { 00:13:54.171 "trtype": "TCP", 00:13:54.171 "adrfam": "IPv4", 00:13:54.171 "traddr": "10.0.0.1", 00:13:54.171 "trsvcid": 
"54522" 00:13:54.171 }, 00:13:54.171 "auth": { 00:13:54.171 "state": "completed", 00:13:54.171 "digest": "sha384", 00:13:54.171 "dhgroup": "ffdhe6144" 00:13:54.171 } 00:13:54.171 } 00:13:54.171 ]' 00:13:54.171 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:54.171 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:54.171 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:54.171 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:54.171 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:54.430 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.430 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.430 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.690 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:13:54.690 19:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:13:55.258 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.258 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:55.258 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.258 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.258 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.258 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:55.258 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:55.258 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:55.258 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:55.518 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:13:55.518 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:13:55.518 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:55.518 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:55.518 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:55.518 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.518 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.518 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.518 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.518 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.518 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.518 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.518 19:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.089 00:13:56.089 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.089 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.089 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.346 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.346 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.346 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.346 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.346 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.346 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:56.346 { 00:13:56.346 "cntlid": 89, 00:13:56.346 "qid": 0, 00:13:56.346 "state": "enabled", 00:13:56.346 "thread": "nvmf_tgt_poll_group_000", 00:13:56.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:13:56.346 "listen_address": { 00:13:56.346 "trtype": "TCP", 00:13:56.346 "adrfam": "IPv4", 00:13:56.346 "traddr": "10.0.0.3", 00:13:56.346 "trsvcid": "4420" 00:13:56.346 }, 00:13:56.346 "peer_address": { 00:13:56.346 
"trtype": "TCP", 00:13:56.346 "adrfam": "IPv4", 00:13:56.346 "traddr": "10.0.0.1", 00:13:56.346 "trsvcid": "36812" 00:13:56.346 }, 00:13:56.346 "auth": { 00:13:56.346 "state": "completed", 00:13:56.346 "digest": "sha384", 00:13:56.346 "dhgroup": "ffdhe8192" 00:13:56.346 } 00:13:56.346 } 00:13:56.346 ]' 00:13:56.346 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.346 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:56.346 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:56.346 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:56.346 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:56.346 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.346 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.347 19:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.913 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:13:56.913 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:13:57.481 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.481 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:57.481 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.481 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.481 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.481 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:57.481 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:57.481 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:57.740 19:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:13:57.740 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:57.740 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:57.740 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:57.740 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:57.740 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.740 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.740 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.740 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.740 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.740 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.740 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.740 19:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.307 00:13:58.307 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.307 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.307 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.567 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.567 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.567 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.567 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.567 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.567 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:58.567 { 00:13:58.567 "cntlid": 91, 00:13:58.567 "qid": 0, 00:13:58.567 "state": "enabled", 00:13:58.567 "thread": "nvmf_tgt_poll_group_000", 00:13:58.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 
00:13:58.567 "listen_address": { 00:13:58.567 "trtype": "TCP", 00:13:58.567 "adrfam": "IPv4", 00:13:58.567 "traddr": "10.0.0.3", 00:13:58.567 "trsvcid": "4420" 00:13:58.567 }, 00:13:58.567 "peer_address": { 00:13:58.567 "trtype": "TCP", 00:13:58.567 "adrfam": "IPv4", 00:13:58.567 "traddr": "10.0.0.1", 00:13:58.567 "trsvcid": "36834" 00:13:58.567 }, 00:13:58.567 "auth": { 00:13:58.567 "state": "completed", 00:13:58.567 "digest": "sha384", 00:13:58.567 "dhgroup": "ffdhe8192" 00:13:58.567 } 00:13:58.567 } 00:13:58.567 ]' 00:13:58.567 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:58.567 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:58.567 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:58.567 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:58.567 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:58.567 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.567 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.567 19:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.136 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:13:59.136 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:13:59.714 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.714 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:13:59.714 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.714 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.714 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.714 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.714 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:59.714 19:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:59.995 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:13:59.995 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.995 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:59.995 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:59.995 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:59.995 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.995 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.995 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.995 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.995 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.995 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.995 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.995 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.562 00:14:00.562 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:00.562 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:00.562 19:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.821 19:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.821 19:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.821 19:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.821 19:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.821 19:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.821 19:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.821 { 00:14:00.821 "cntlid": 93, 00:14:00.821 "qid": 0, 00:14:00.821 "state": "enabled", 00:14:00.821 "thread": 
"nvmf_tgt_poll_group_000", 00:14:00.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:00.821 "listen_address": { 00:14:00.821 "trtype": "TCP", 00:14:00.821 "adrfam": "IPv4", 00:14:00.821 "traddr": "10.0.0.3", 00:14:00.821 "trsvcid": "4420" 00:14:00.821 }, 00:14:00.821 "peer_address": { 00:14:00.821 "trtype": "TCP", 00:14:00.821 "adrfam": "IPv4", 00:14:00.821 "traddr": "10.0.0.1", 00:14:00.821 "trsvcid": "36874" 00:14:00.821 }, 00:14:00.821 "auth": { 00:14:00.821 "state": "completed", 00:14:00.821 "digest": "sha384", 00:14:00.821 "dhgroup": "ffdhe8192" 00:14:00.821 } 00:14:00.821 } 00:14:00.821 ]' 00:14:00.821 19:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.080 19:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:01.080 19:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.080 19:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:01.080 19:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.080 19:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.080 19:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.080 19:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.339 19:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:14:01.339 19:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:14:01.904 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.904 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:01.904 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.904 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.904 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.904 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:01.904 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:01.904 19:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:02.161 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:02.161 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.161 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:02.161 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:02.161 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:02.161 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.161 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:14:02.161 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.161 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.161 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.161 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:02.161 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:02.161 19:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:02.726 00:14:02.726 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:02.726 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:02.726 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.292 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.292 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.292 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.292 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.292 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.292 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.292 { 00:14:03.292 "cntlid": 95, 00:14:03.292 "qid": 0, 00:14:03.292 "state": "enabled", 00:14:03.292 
"thread": "nvmf_tgt_poll_group_000", 00:14:03.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:03.292 "listen_address": { 00:14:03.292 "trtype": "TCP", 00:14:03.292 "adrfam": "IPv4", 00:14:03.292 "traddr": "10.0.0.3", 00:14:03.292 "trsvcid": "4420" 00:14:03.292 }, 00:14:03.292 "peer_address": { 00:14:03.292 "trtype": "TCP", 00:14:03.292 "adrfam": "IPv4", 00:14:03.292 "traddr": "10.0.0.1", 00:14:03.292 "trsvcid": "36888" 00:14:03.292 }, 00:14:03.292 "auth": { 00:14:03.292 "state": "completed", 00:14:03.292 "digest": "sha384", 00:14:03.292 "dhgroup": "ffdhe8192" 00:14:03.292 } 00:14:03.292 } 00:14:03.292 ]' 00:14:03.292 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.293 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:03.293 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.293 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:03.293 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.293 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.293 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.293 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.550 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:14:03.550 19:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:14:04.117 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.117 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:04.117 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.117 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.117 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.117 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:04.117 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:04.117 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.117 19:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:04.117 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:04.376 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:04.376 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.376 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:04.376 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:04.376 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:04.376 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.376 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.376 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.376 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.376 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.376 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.376 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.376 19:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.634 00:14:04.634 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:04.634 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.634 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.892 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.892 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.892 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.892 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.149 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.149 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.149 { 00:14:05.149 "cntlid": 97, 00:14:05.149 "qid": 0, 00:14:05.149 "state": "enabled", 00:14:05.149 "thread": "nvmf_tgt_poll_group_000", 00:14:05.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:05.149 "listen_address": { 00:14:05.149 "trtype": "TCP", 00:14:05.149 "adrfam": "IPv4", 00:14:05.149 "traddr": "10.0.0.3", 00:14:05.149 "trsvcid": "4420" 00:14:05.149 }, 00:14:05.149 "peer_address": { 00:14:05.149 "trtype": "TCP", 00:14:05.149 "adrfam": "IPv4", 00:14:05.149 "traddr": "10.0.0.1", 00:14:05.149 "trsvcid": "36914" 00:14:05.149 }, 00:14:05.149 "auth": { 00:14:05.149 "state": "completed", 00:14:05.149 "digest": "sha512", 00:14:05.149 "dhgroup": "null" 00:14:05.149 } 00:14:05.149 } 00:14:05.149 ]' 00:14:05.149 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.149 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:05.149 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.149 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:05.149 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.149 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.149 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.149 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.406 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:14:05.406 19:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:14:05.970 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.970 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:05.970 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.970 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.970 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:05.970 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.970 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:05.970 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:06.535 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:06.535 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.535 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:06.535 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:06.535 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:06.535 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.535 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.535 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.535 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.535 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.535 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.535 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.535 19:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.793 00:14:06.793 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.793 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.793 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:07.051 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.051 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.051 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.051 19:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.051 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.051 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.051 { 00:14:07.051 "cntlid": 99, 00:14:07.051 "qid": 0, 00:14:07.051 "state": "enabled", 00:14:07.051 "thread": "nvmf_tgt_poll_group_000", 00:14:07.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:07.051 "listen_address": { 00:14:07.051 "trtype": "TCP", 00:14:07.051 "adrfam": "IPv4", 00:14:07.051 "traddr": "10.0.0.3", 00:14:07.051 "trsvcid": "4420" 00:14:07.051 }, 00:14:07.051 "peer_address": { 00:14:07.051 "trtype": "TCP", 00:14:07.051 "adrfam": "IPv4", 00:14:07.051 "traddr": "10.0.0.1", 00:14:07.051 "trsvcid": "55920" 00:14:07.051 }, 00:14:07.051 "auth": { 00:14:07.051 "state": "completed", 00:14:07.051 "digest": "sha512", 00:14:07.051 "dhgroup": "null" 00:14:07.051 } 00:14:07.051 } 00:14:07.051 ]' 00:14:07.051 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:07.051 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:07.051 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.051 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:07.051 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:07.309 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.309 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.309 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.567 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:14:07.567 19:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:14:08.135 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.135 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:08.135 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.135 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.135 19:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.135 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:08.135 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:08.135 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:08.394 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:08.394 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:08.394 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:08.394 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:08.394 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:08.394 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.394 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.394 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.394 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.394 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.394 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.394 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.394 19:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.963 00:14:08.963 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.963 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.963 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.226 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.226 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.226 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.226 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.226 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.226 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.226 { 00:14:09.226 "cntlid": 101, 00:14:09.226 "qid": 0, 00:14:09.226 "state": "enabled", 00:14:09.226 "thread": "nvmf_tgt_poll_group_000", 00:14:09.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:09.226 "listen_address": { 00:14:09.226 "trtype": "TCP", 00:14:09.226 "adrfam": "IPv4", 00:14:09.226 "traddr": "10.0.0.3", 00:14:09.226 "trsvcid": "4420" 00:14:09.226 }, 00:14:09.226 "peer_address": { 00:14:09.226 "trtype": "TCP", 00:14:09.226 "adrfam": "IPv4", 00:14:09.226 "traddr": "10.0.0.1", 00:14:09.226 "trsvcid": "55938" 00:14:09.226 }, 00:14:09.226 "auth": { 00:14:09.226 "state": "completed", 00:14:09.226 "digest": "sha512", 00:14:09.226 "dhgroup": "null" 00:14:09.226 } 00:14:09.226 } 00:14:09.226 ]' 00:14:09.226 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:09.226 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:09.226 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:09.226 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:09.226 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.226 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.226 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.226 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.484 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:14:09.484 19:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:14:10.420 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.420 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:10.420 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.420 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:10.420 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.420 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.420 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:10.420 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:10.679 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:10.679 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.679 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:10.679 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:10.679 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:10.679 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.679 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:14:10.679 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.679 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.679 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.679 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:10.679 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:10.679 19:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:10.937 00:14:10.937 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.937 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.937 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.195 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.195 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.195 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:11.195 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.195 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.195 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.195 { 00:14:11.195 "cntlid": 103, 00:14:11.195 "qid": 0, 00:14:11.195 "state": "enabled", 00:14:11.195 "thread": "nvmf_tgt_poll_group_000", 00:14:11.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:11.195 "listen_address": { 00:14:11.195 "trtype": "TCP", 00:14:11.195 "adrfam": "IPv4", 00:14:11.195 "traddr": "10.0.0.3", 00:14:11.195 "trsvcid": "4420" 00:14:11.195 }, 00:14:11.195 "peer_address": { 00:14:11.195 "trtype": "TCP", 00:14:11.195 "adrfam": "IPv4", 00:14:11.195 "traddr": "10.0.0.1", 00:14:11.195 "trsvcid": "55962" 00:14:11.195 }, 00:14:11.195 "auth": { 00:14:11.195 "state": "completed", 00:14:11.195 "digest": "sha512", 00:14:11.195 "dhgroup": "null" 00:14:11.195 } 00:14:11.195 } 00:14:11.195 ]' 00:14:11.195 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.195 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:11.195 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.195 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:11.195 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.195 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.195 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.195 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.776 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:14:11.776 19:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:14:12.342 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.342 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:12.342 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.342 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.343 19:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.910 00:14:12.910 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.910 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.910 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.168 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.168 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.168 
19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.168 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.168 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.168 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.168 { 00:14:13.168 "cntlid": 105, 00:14:13.168 "qid": 0, 00:14:13.168 "state": "enabled", 00:14:13.168 "thread": "nvmf_tgt_poll_group_000", 00:14:13.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:13.168 "listen_address": { 00:14:13.168 "trtype": "TCP", 00:14:13.168 "adrfam": "IPv4", 00:14:13.168 "traddr": "10.0.0.3", 00:14:13.168 "trsvcid": "4420" 00:14:13.168 }, 00:14:13.168 "peer_address": { 00:14:13.168 "trtype": "TCP", 00:14:13.168 "adrfam": "IPv4", 00:14:13.168 "traddr": "10.0.0.1", 00:14:13.168 "trsvcid": "55984" 00:14:13.168 }, 00:14:13.168 "auth": { 00:14:13.168 "state": "completed", 00:14:13.168 "digest": "sha512", 00:14:13.168 "dhgroup": "ffdhe2048" 00:14:13.168 } 00:14:13.168 } 00:14:13.168 ]' 00:14:13.168 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.168 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:13.168 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.168 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:13.168 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.168 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.168 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.168 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.426 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:14:13.426 19:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:14:13.993 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.252 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:14.252 19:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.252 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.252 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.252 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.252 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:14.252 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:14.511 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:14.511 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.511 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:14.511 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:14.511 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:14.511 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.511 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.511 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.511 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.511 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.511 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.511 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.511 19:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.769 00:14:14.769 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.769 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.769 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.029 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:14:15.029 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.029 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.029 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.029 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.029 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.029 { 00:14:15.029 "cntlid": 107, 00:14:15.029 "qid": 0, 00:14:15.029 "state": "enabled", 00:14:15.029 "thread": "nvmf_tgt_poll_group_000", 00:14:15.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:15.029 "listen_address": { 00:14:15.029 "trtype": "TCP", 00:14:15.029 "adrfam": "IPv4", 00:14:15.029 "traddr": "10.0.0.3", 00:14:15.029 "trsvcid": "4420" 00:14:15.029 }, 00:14:15.029 "peer_address": { 00:14:15.029 "trtype": "TCP", 00:14:15.029 "adrfam": "IPv4", 00:14:15.029 "traddr": "10.0.0.1", 00:14:15.029 "trsvcid": "56006" 00:14:15.029 }, 00:14:15.029 "auth": { 00:14:15.029 "state": "completed", 00:14:15.029 "digest": "sha512", 00:14:15.029 "dhgroup": "ffdhe2048" 00:14:15.029 } 00:14:15.029 } 00:14:15.029 ]' 00:14:15.029 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.029 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:15.029 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.029 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:15.029 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.289 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.289 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.289 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.547 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:14:15.547 19:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:14:16.114 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.114 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:16.114 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.114 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.114 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.114 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.114 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:16.114 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:16.372 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:16.372 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.372 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:16.372 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:16.372 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:16.372 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.372 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.372 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.372 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.372 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.372 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.372 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.372 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.631 00:14:16.631 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.631 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.631 19:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:16.890 19:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.890 19:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.890 19:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.890 19:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.890 19:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.890 19:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.890 { 00:14:16.890 "cntlid": 109, 00:14:16.890 "qid": 0, 00:14:16.890 "state": "enabled", 00:14:16.890 "thread": "nvmf_tgt_poll_group_000", 00:14:16.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:16.890 "listen_address": { 00:14:16.890 "trtype": "TCP", 00:14:16.890 "adrfam": "IPv4", 00:14:16.890 "traddr": "10.0.0.3", 00:14:16.890 "trsvcid": "4420" 00:14:16.890 }, 00:14:16.890 "peer_address": { 00:14:16.890 "trtype": "TCP", 00:14:16.890 "adrfam": "IPv4", 00:14:16.890 "traddr": "10.0.0.1", 00:14:16.890 "trsvcid": "46266" 00:14:16.890 }, 00:14:16.890 "auth": { 00:14:16.890 "state": "completed", 00:14:16.890 "digest": "sha512", 00:14:16.890 "dhgroup": "ffdhe2048" 00:14:16.890 } 00:14:16.890 } 00:14:16.890 ]' 00:14:16.890 19:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.149 19:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:17.149 19:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.149 19:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:17.149 19:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.149 19:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.149 19:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.149 19:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.407 19:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:14:17.407 19:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:14:18.036 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.036 19:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:18.036 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.036 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.036 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.036 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.036 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:18.036 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:18.293 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:18.293 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.293 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:18.293 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:18.293 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:18.293 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.293 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:14:18.293 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.293 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.293 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.293 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:18.293 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:18.293 19:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:18.859 00:14:18.859 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.859 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.859 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.118 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.118 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.118 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.118 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.118 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.118 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.118 { 00:14:19.118 "cntlid": 111, 00:14:19.118 "qid": 0, 00:14:19.118 "state": "enabled", 00:14:19.118 "thread": "nvmf_tgt_poll_group_000", 00:14:19.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:19.118 "listen_address": { 00:14:19.118 "trtype": "TCP", 00:14:19.118 "adrfam": "IPv4", 00:14:19.118 "traddr": "10.0.0.3", 00:14:19.118 "trsvcid": "4420" 00:14:19.118 }, 00:14:19.118 "peer_address": { 00:14:19.118 "trtype": "TCP", 00:14:19.118 "adrfam": "IPv4", 00:14:19.118 "traddr": "10.0.0.1", 00:14:19.118 "trsvcid": "46280" 00:14:19.118 }, 00:14:19.118 "auth": { 00:14:19.118 "state": "completed", 00:14:19.118 "digest": "sha512", 00:14:19.118 "dhgroup": "ffdhe2048" 00:14:19.118 } 00:14:19.118 } 00:14:19.118 ]' 00:14:19.118 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.118 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:19.118 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.118 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:19.118 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.118 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.118 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.118 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.376 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:14:19.376 19:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:14:19.944 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.944 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:19.944 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.944 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.202 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.202 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:20.202 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.202 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:20.202 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:20.202 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:20.202 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.202 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:20.202 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:20.202 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:20.202 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.202 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.202 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.202 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.202 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.202 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.203 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.203 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.771 00:14:20.771 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.771 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.771 19:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.029 19:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.029 19:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.029 19:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.029 19:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.029 19:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.029 19:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.029 { 00:14:21.029 "cntlid": 113, 00:14:21.030 "qid": 0, 00:14:21.030 "state": "enabled", 00:14:21.030 "thread": "nvmf_tgt_poll_group_000", 00:14:21.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:21.030 "listen_address": { 00:14:21.030 "trtype": "TCP", 00:14:21.030 "adrfam": "IPv4", 00:14:21.030 "traddr": "10.0.0.3", 00:14:21.030 "trsvcid": "4420" 00:14:21.030 }, 00:14:21.030 "peer_address": { 00:14:21.030 "trtype": "TCP", 00:14:21.030 "adrfam": "IPv4", 00:14:21.030 "traddr": "10.0.0.1", 00:14:21.030 "trsvcid": "46314" 00:14:21.030 }, 00:14:21.030 "auth": { 00:14:21.030 "state": "completed", 00:14:21.030 "digest": "sha512", 00:14:21.030 "dhgroup": "ffdhe3072" 00:14:21.030 } 00:14:21.030 } 00:14:21.030 ]' 00:14:21.030 19:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.030 19:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:21.030 19:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.030 19:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:21.030 19:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.030 19:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.030 19:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.030 19:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.597 19:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:14:21.597 19:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret 
DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:14:22.167 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.167 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:22.167 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.167 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.167 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.167 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.167 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:22.167 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:22.425 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:22.425 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.425 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:22.425 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:22.425 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:22.425 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.425 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.425 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.425 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.425 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.425 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.425 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.425 19:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.991 00:14:22.991 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.991 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.991 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.991 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.991 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.991 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.991 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.250 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.250 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.250 { 00:14:23.250 "cntlid": 115, 00:14:23.250 "qid": 0, 00:14:23.250 "state": "enabled", 00:14:23.250 "thread": "nvmf_tgt_poll_group_000", 00:14:23.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:23.250 "listen_address": { 00:14:23.250 "trtype": "TCP", 00:14:23.250 "adrfam": "IPv4", 00:14:23.250 "traddr": "10.0.0.3", 00:14:23.250 "trsvcid": "4420" 00:14:23.250 }, 00:14:23.250 "peer_address": { 00:14:23.250 "trtype": "TCP", 00:14:23.250 "adrfam": "IPv4", 00:14:23.250 "traddr": "10.0.0.1", 00:14:23.250 "trsvcid": "46348" 00:14:23.250 }, 00:14:23.250 "auth": { 00:14:23.250 "state": "completed", 00:14:23.250 "digest": "sha512", 00:14:23.250 "dhgroup": "ffdhe3072" 00:14:23.250 } 00:14:23.250 } 00:14:23.250 ]' 00:14:23.250 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.250 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:23.250 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.250 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:23.250 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.250 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.250 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.250 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.516 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:14:23.516 19:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid 
bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:14:24.094 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.094 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:24.094 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.094 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.094 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.094 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.094 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:24.094 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:24.353 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:24.353 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:24.353 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:24.353 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:24.353 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:24.353 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.353 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.353 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.353 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.353 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.353 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.353 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.353 19:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.922 00:14:24.922 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.922 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.922 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.180 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.180 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.180 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.180 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.180 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.180 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.180 { 00:14:25.180 "cntlid": 117, 00:14:25.180 "qid": 0, 00:14:25.180 "state": "enabled", 00:14:25.181 "thread": "nvmf_tgt_poll_group_000", 00:14:25.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:25.181 "listen_address": { 00:14:25.181 "trtype": "TCP", 00:14:25.181 "adrfam": "IPv4", 00:14:25.181 "traddr": "10.0.0.3", 00:14:25.181 "trsvcid": "4420" 00:14:25.181 }, 00:14:25.181 "peer_address": { 00:14:25.181 "trtype": "TCP", 00:14:25.181 "adrfam": "IPv4", 00:14:25.181 "traddr": "10.0.0.1", 00:14:25.181 "trsvcid": "46380" 00:14:25.181 }, 00:14:25.181 "auth": { 00:14:25.181 "state": "completed", 00:14:25.181 "digest": "sha512", 00:14:25.181 "dhgroup": "ffdhe3072" 00:14:25.181 } 00:14:25.181 } 00:14:25.181 ]' 00:14:25.181 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.181 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:25.181 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.181 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:25.181 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.181 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.181 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.181 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.440 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:14:25.440 19:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:14:26.007 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.007 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:26.007 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.007 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.007 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.007 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.007 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:26.007 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:26.267 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:26.267 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.267 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:26.267 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:26.267 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:26.267 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.267 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:14:26.267 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.267 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.267 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.267 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:26.267 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.267 19:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.833 00:14:26.833 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.833 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.833 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.093 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.093 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.093 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.093 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.093 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.093 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.093 { 00:14:27.093 "cntlid": 119, 00:14:27.093 "qid": 0, 00:14:27.093 "state": "enabled", 00:14:27.093 "thread": "nvmf_tgt_poll_group_000", 00:14:27.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:27.093 "listen_address": { 00:14:27.093 "trtype": "TCP", 00:14:27.093 "adrfam": "IPv4", 00:14:27.093 "traddr": "10.0.0.3", 00:14:27.093 "trsvcid": "4420" 00:14:27.093 }, 00:14:27.093 "peer_address": { 00:14:27.093 "trtype": "TCP", 00:14:27.093 "adrfam": "IPv4", 00:14:27.093 "traddr": "10.0.0.1", 00:14:27.093 "trsvcid": "47568" 00:14:27.093 }, 00:14:27.093 "auth": { 00:14:27.093 "state": "completed", 00:14:27.093 "digest": "sha512", 00:14:27.093 "dhgroup": "ffdhe3072" 00:14:27.093 } 00:14:27.093 } 00:14:27.093 ]' 00:14:27.093 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.093 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:27.093 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.093 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:27.093 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.093 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.093 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.093 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.353 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:14:27.353 19:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.289 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.549 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.549 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.549 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.549 19:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.809 00:14:28.809 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.809 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.809 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.068 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.068 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.068 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.068 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.068 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.068 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.068 { 00:14:29.068 "cntlid": 121, 00:14:29.068 "qid": 0, 00:14:29.068 "state": "enabled", 00:14:29.068 "thread": "nvmf_tgt_poll_group_000", 00:14:29.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:29.068 "listen_address": { 00:14:29.068 "trtype": "TCP", 00:14:29.068 "adrfam": "IPv4", 00:14:29.068 "traddr": "10.0.0.3", 00:14:29.068 "trsvcid": "4420" 00:14:29.068 }, 00:14:29.068 "peer_address": { 00:14:29.068 "trtype": "TCP", 00:14:29.068 "adrfam": "IPv4", 00:14:29.068 "traddr": "10.0.0.1", 00:14:29.068 "trsvcid": "47584" 00:14:29.068 }, 00:14:29.068 "auth": { 00:14:29.068 "state": "completed", 00:14:29.068 "digest": "sha512", 00:14:29.068 "dhgroup": "ffdhe4096" 00:14:29.068 } 00:14:29.068 } 00:14:29.068 ]' 00:14:29.068 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.068 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:29.068 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.327 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:29.327 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.327 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.327 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.327 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.646 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret 
DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:14:29.646 19:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:14:30.231 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.231 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:30.231 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.231 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.231 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.231 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.231 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:30.231 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:30.490 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:14:30.490 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.490 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:30.490 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:30.490 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:30.490 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.490 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.490 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.490 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.490 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.490 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.490 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.490 19:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.749 00:14:30.749 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.749 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.749 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.008 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.008 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.008 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.008 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.008 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.008 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.008 { 00:14:31.008 "cntlid": 123, 00:14:31.008 "qid": 0, 00:14:31.008 "state": "enabled", 00:14:31.008 "thread": "nvmf_tgt_poll_group_000", 00:14:31.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:31.008 "listen_address": { 00:14:31.008 "trtype": "TCP", 00:14:31.008 "adrfam": "IPv4", 00:14:31.008 "traddr": "10.0.0.3", 00:14:31.008 "trsvcid": "4420" 00:14:31.008 }, 00:14:31.008 "peer_address": { 00:14:31.008 "trtype": "TCP", 00:14:31.008 "adrfam": "IPv4", 00:14:31.008 "traddr": "10.0.0.1", 00:14:31.008 "trsvcid": "47622" 00:14:31.008 }, 00:14:31.008 "auth": { 00:14:31.008 "state": "completed", 00:14:31.008 "digest": "sha512", 00:14:31.008 "dhgroup": "ffdhe4096" 00:14:31.008 } 00:14:31.008 } 00:14:31.008 ]' 00:14:31.008 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.268 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:31.268 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.268 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:31.268 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.268 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.268 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.268 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.527 19:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:14:31.527 19:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:14:32.096 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.096 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:32.096 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.096 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.096 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.096 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.096 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:32.096 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:32.355 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:14:32.355 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.356 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:32.356 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:32.356 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:32.356 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.356 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.356 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.356 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.356 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.356 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.356 19:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.356 19:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.923 00:14:32.923 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.923 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.923 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.182 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.182 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.182 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.182 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.182 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.182 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.182 { 00:14:33.182 "cntlid": 125, 00:14:33.182 "qid": 0, 00:14:33.182 "state": "enabled", 00:14:33.182 "thread": "nvmf_tgt_poll_group_000", 00:14:33.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:33.182 "listen_address": { 00:14:33.182 "trtype": "TCP", 00:14:33.182 "adrfam": "IPv4", 00:14:33.182 "traddr": "10.0.0.3", 00:14:33.182 "trsvcid": "4420" 00:14:33.182 }, 00:14:33.182 "peer_address": { 00:14:33.182 "trtype": "TCP", 00:14:33.182 "adrfam": "IPv4", 00:14:33.182 "traddr": "10.0.0.1", 00:14:33.182 "trsvcid": "47648" 00:14:33.182 }, 00:14:33.182 "auth": { 00:14:33.182 "state": "completed", 00:14:33.182 "digest": "sha512", 00:14:33.182 "dhgroup": "ffdhe4096" 00:14:33.182 } 00:14:33.182 } 00:14:33.182 ]' 00:14:33.182 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.182 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:33.182 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.182 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:33.182 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.182 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.182 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.182 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.751 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:14:33.751 19:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:14:34.320 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.320 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:34.320 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.320 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.320 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.320 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.320 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:34.320 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:34.579 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:14:34.579 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.579 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:34.579 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:34.579 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:34.579 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.579 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:14:34.579 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.579 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.579 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.579 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:14:34.579 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:34.579 19:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:34.837 00:14:35.096 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.096 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.096 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.096 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.096 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.096 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.096 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.354 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.354 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.354 { 00:14:35.354 "cntlid": 127, 00:14:35.354 "qid": 0, 00:14:35.354 "state": "enabled", 00:14:35.354 "thread": "nvmf_tgt_poll_group_000", 00:14:35.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:35.354 "listen_address": { 00:14:35.354 "trtype": "TCP", 00:14:35.354 "adrfam": "IPv4", 00:14:35.354 "traddr": "10.0.0.3", 00:14:35.354 "trsvcid": "4420" 00:14:35.354 }, 00:14:35.354 "peer_address": { 00:14:35.354 "trtype": "TCP", 00:14:35.354 "adrfam": "IPv4", 00:14:35.354 "traddr": "10.0.0.1", 00:14:35.354 "trsvcid": "47662" 00:14:35.354 }, 00:14:35.354 "auth": { 00:14:35.354 "state": "completed", 00:14:35.354 "digest": "sha512", 00:14:35.354 "dhgroup": "ffdhe4096" 00:14:35.354 } 00:14:35.354 } 00:14:35.354 ]' 00:14:35.354 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.354 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:35.354 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.354 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:35.354 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.354 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.354 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.354 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.612 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:14:35.612 19:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:14:36.207 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.207 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:36.207 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.207 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.207 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.207 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:36.207 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.207 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:36.207 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:36.773 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:14:36.773 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.773 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:36.773 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:36.773 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:36.773 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.773 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.773 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.774 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.774 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.774 19:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.774 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.774 19:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.032 00:14:37.032 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.032 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.032 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.290 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.290 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.290 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.290 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.290 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.290 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.290 { 00:14:37.290 "cntlid": 129, 00:14:37.290 "qid": 0, 00:14:37.290 "state": "enabled", 00:14:37.290 "thread": "nvmf_tgt_poll_group_000", 00:14:37.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:37.290 "listen_address": { 00:14:37.290 "trtype": "TCP", 00:14:37.290 "adrfam": "IPv4", 00:14:37.290 "traddr": "10.0.0.3", 00:14:37.290 "trsvcid": "4420" 00:14:37.290 }, 00:14:37.290 "peer_address": { 00:14:37.290 "trtype": "TCP", 00:14:37.290 "adrfam": "IPv4", 00:14:37.290 "traddr": "10.0.0.1", 00:14:37.290 "trsvcid": "36044" 00:14:37.290 }, 00:14:37.290 "auth": { 00:14:37.290 "state": "completed", 00:14:37.290 "digest": "sha512", 00:14:37.290 "dhgroup": "ffdhe6144" 00:14:37.290 } 00:14:37.290 } 00:14:37.290 ]' 00:14:37.290 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.290 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:37.290 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.548 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:37.548 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.548 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.549 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.549 19:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.807 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:14:37.807 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:14:38.374 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.374 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:38.374 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.374 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.374 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.374 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.374 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:38.374 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:38.633 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:14:38.633 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.633 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:38.633 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:38.633 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:38.633 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.633 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.633 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.633 19:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.633 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.633 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.633 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.633 19:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.201 00:14:39.201 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.201 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.201 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.459 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.459 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.459 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.459 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.459 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.459 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.459 { 00:14:39.459 "cntlid": 131, 00:14:39.459 "qid": 0, 00:14:39.459 "state": "enabled", 00:14:39.459 "thread": "nvmf_tgt_poll_group_000", 00:14:39.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:39.459 "listen_address": { 00:14:39.459 "trtype": "TCP", 00:14:39.459 "adrfam": "IPv4", 00:14:39.459 "traddr": "10.0.0.3", 00:14:39.459 "trsvcid": "4420" 00:14:39.459 }, 00:14:39.459 "peer_address": { 00:14:39.459 "trtype": "TCP", 00:14:39.460 "adrfam": "IPv4", 00:14:39.460 "traddr": "10.0.0.1", 00:14:39.460 "trsvcid": "36074" 00:14:39.460 }, 00:14:39.460 "auth": { 00:14:39.460 "state": "completed", 00:14:39.460 "digest": "sha512", 00:14:39.460 "dhgroup": "ffdhe6144" 00:14:39.460 } 00:14:39.460 } 00:14:39.460 ]' 00:14:39.460 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.460 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:39.460 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.460 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:39.460 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:14:39.460 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.460 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.460 19:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.720 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:14:39.720 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:14:40.288 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.288 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:40.288 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.288 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.289 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.289 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.289 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:40.289 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:40.548 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:14:40.548 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.548 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:40.548 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:40.548 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:40.548 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.548 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.548 19:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.548 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.548 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.548 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.548 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.548 19:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.116 00:14:41.116 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.116 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.116 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.375 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.375 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.376 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.376 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.376 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.376 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.376 { 00:14:41.376 "cntlid": 133, 00:14:41.376 "qid": 0, 00:14:41.376 "state": "enabled", 00:14:41.376 "thread": "nvmf_tgt_poll_group_000", 00:14:41.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:41.376 "listen_address": { 00:14:41.376 "trtype": "TCP", 00:14:41.376 "adrfam": "IPv4", 00:14:41.376 "traddr": "10.0.0.3", 00:14:41.376 "trsvcid": "4420" 00:14:41.376 }, 00:14:41.376 "peer_address": { 00:14:41.376 "trtype": "TCP", 00:14:41.376 "adrfam": "IPv4", 00:14:41.376 "traddr": "10.0.0.1", 00:14:41.376 "trsvcid": "36094" 00:14:41.376 }, 00:14:41.376 "auth": { 00:14:41.376 "state": "completed", 00:14:41.376 "digest": "sha512", 00:14:41.376 "dhgroup": "ffdhe6144" 00:14:41.376 } 00:14:41.376 } 00:14:41.376 ]' 00:14:41.376 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.376 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:41.376 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.376 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:14:41.376 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.376 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.376 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.376 19:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.948 19:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:14:41.948 19:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:14:42.516 19:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.516 19:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:42.516 19:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.516 19:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.516 19:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.516 19:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.516 19:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:42.516 19:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:42.788 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:14:42.788 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.789 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:42.789 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:42.789 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:42.789 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.789 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:14:42.789 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.789 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.789 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.789 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:42.789 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:42.789 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.057 00:14:43.057 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.057 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.057 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.316 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.316 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.316 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.316 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.316 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.316 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.316 { 00:14:43.316 "cntlid": 135, 00:14:43.316 "qid": 0, 00:14:43.316 "state": "enabled", 00:14:43.316 "thread": "nvmf_tgt_poll_group_000", 00:14:43.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:43.316 "listen_address": { 00:14:43.316 "trtype": "TCP", 00:14:43.316 "adrfam": "IPv4", 00:14:43.316 "traddr": "10.0.0.3", 00:14:43.316 "trsvcid": "4420" 00:14:43.316 }, 00:14:43.316 "peer_address": { 00:14:43.316 "trtype": "TCP", 00:14:43.316 "adrfam": "IPv4", 00:14:43.316 "traddr": "10.0.0.1", 00:14:43.316 "trsvcid": "36108" 00:14:43.316 }, 00:14:43.316 "auth": { 00:14:43.316 "state": "completed", 00:14:43.316 "digest": "sha512", 00:14:43.316 "dhgroup": "ffdhe6144" 00:14:43.316 } 00:14:43.316 } 00:14:43.316 ]' 00:14:43.316 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.575 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:43.575 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.575 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:43.575 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.575 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.575 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.575 19:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.834 19:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:14:43.834 19:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:14:44.402 19:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.402 19:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:44.402 19:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.402 19:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.402 19:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.402 19:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:44.402 19:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.402 19:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:44.402 19:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:44.660 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:14:44.660 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:44.660 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:44.660 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:44.660 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:44.660 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.660 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.660 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.660 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.660 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.660 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.660 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.660 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.228 00:14:45.228 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.228 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.228 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.487 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.487 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.487 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.487 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.487 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.487 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.487 { 00:14:45.487 "cntlid": 137, 00:14:45.487 "qid": 0, 00:14:45.487 "state": "enabled", 00:14:45.487 "thread": "nvmf_tgt_poll_group_000", 00:14:45.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:45.487 "listen_address": { 00:14:45.487 "trtype": "TCP", 00:14:45.487 "adrfam": "IPv4", 00:14:45.487 "traddr": "10.0.0.3", 00:14:45.487 "trsvcid": "4420" 00:14:45.487 }, 00:14:45.487 "peer_address": { 00:14:45.487 "trtype": "TCP", 00:14:45.488 "adrfam": "IPv4", 00:14:45.488 "traddr": "10.0.0.1", 00:14:45.488 "trsvcid": "35674" 00:14:45.488 }, 00:14:45.488 "auth": { 00:14:45.488 "state": "completed", 00:14:45.488 "digest": "sha512", 00:14:45.488 "dhgroup": "ffdhe8192" 00:14:45.488 } 00:14:45.488 } 00:14:45.488 ]' 00:14:45.488 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.747 19:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:45.747 19:34:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.747 19:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:45.747 19:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.747 19:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.747 19:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.747 19:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.005 19:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:14:46.005 19:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:14:46.574 19:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.574 19:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:46.574 19:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.574 19:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.574 19:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.574 19:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.574 19:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:46.574 19:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:46.834 19:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:14:46.834 19:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.834 19:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:46.834 19:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:46.834 19:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:46.834 19:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.834 19:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.834 19:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.834 19:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.834 19:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.834 19:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.834 19:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.834 19:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.768 00:14:47.768 19:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.768 19:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.769 19:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.769 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.769 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.769 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.769 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.769 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.769 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.769 { 00:14:47.769 "cntlid": 139, 00:14:47.769 "qid": 0, 00:14:47.769 "state": "enabled", 00:14:47.769 "thread": "nvmf_tgt_poll_group_000", 00:14:47.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:47.769 "listen_address": { 00:14:47.769 "trtype": "TCP", 00:14:47.769 "adrfam": "IPv4", 00:14:47.769 "traddr": "10.0.0.3", 00:14:47.769 "trsvcid": "4420" 00:14:47.769 }, 00:14:47.769 "peer_address": { 00:14:47.769 "trtype": "TCP", 00:14:47.769 "adrfam": "IPv4", 00:14:47.769 "traddr": "10.0.0.1", 00:14:47.769 "trsvcid": "35712" 00:14:47.769 }, 00:14:47.769 "auth": { 00:14:47.769 "state": "completed", 00:14:47.769 "digest": "sha512", 00:14:47.769 "dhgroup": "ffdhe8192" 00:14:47.769 } 00:14:47.769 } 00:14:47.769 ]' 00:14:47.769 19:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.027 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.027 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.027 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:48.027 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.027 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.027 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.027 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.286 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:14:48.286 19:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: --dhchap-ctrl-secret DHHC-1:02:NzRiNjdiOGZmZDQ1YjhiYzY2NGQ1NDE4OTkyMWVlNTgxZTNhNzI2OWVmNTc1OTJmPstlbA==: 00:14:48.875 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.875 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:48.875 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.875 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.875 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.875 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.875 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:48.875 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:49.445 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:14:49.445 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.445 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:49.445 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:14:49.445 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:49.445 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.445 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.445 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.445 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.445 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.445 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.445 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.445 19:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.012 00:14:50.012 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.012 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.012 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.269 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.269 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.269 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.269 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.269 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.269 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.269 { 00:14:50.269 "cntlid": 141, 00:14:50.269 "qid": 0, 00:14:50.269 "state": "enabled", 00:14:50.269 "thread": "nvmf_tgt_poll_group_000", 00:14:50.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:50.269 "listen_address": { 00:14:50.269 "trtype": "TCP", 00:14:50.269 "adrfam": "IPv4", 00:14:50.269 "traddr": "10.0.0.3", 00:14:50.269 "trsvcid": "4420" 00:14:50.269 }, 00:14:50.269 "peer_address": { 00:14:50.269 "trtype": "TCP", 00:14:50.269 "adrfam": "IPv4", 00:14:50.269 "traddr": "10.0.0.1", 00:14:50.269 "trsvcid": "35740" 00:14:50.269 }, 00:14:50.269 "auth": { 00:14:50.269 "state": "completed", 00:14:50.269 "digest": 
"sha512", 00:14:50.269 "dhgroup": "ffdhe8192" 00:14:50.269 } 00:14:50.269 } 00:14:50.269 ]' 00:14:50.269 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.269 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:50.269 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.269 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:50.269 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.270 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.270 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.270 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.528 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:14:50.528 19:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:01:NDYzOGIxMTVjM2QwYmNmZjViNDU0MzZhYzYxZjQxOGEW+wVW: 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.465 19:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.032 00:14:52.032 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.032 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.032 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.598 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.598 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.598 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.598 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.598 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.598 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.598 { 00:14:52.598 "cntlid": 143, 00:14:52.598 "qid": 0, 00:14:52.598 "state": "enabled", 00:14:52.598 "thread": "nvmf_tgt_poll_group_000", 00:14:52.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:52.598 "listen_address": { 00:14:52.598 "trtype": "TCP", 00:14:52.598 "adrfam": "IPv4", 00:14:52.598 "traddr": "10.0.0.3", 00:14:52.598 "trsvcid": "4420" 00:14:52.598 }, 00:14:52.598 "peer_address": { 00:14:52.598 "trtype": "TCP", 00:14:52.598 "adrfam": "IPv4", 00:14:52.598 "traddr": "10.0.0.1", 00:14:52.598 "trsvcid": "35760" 00:14:52.598 }, 00:14:52.598 "auth": { 00:14:52.598 "state": "completed", 00:14:52.598 
"digest": "sha512", 00:14:52.598 "dhgroup": "ffdhe8192" 00:14:52.598 } 00:14:52.598 } 00:14:52.598 ]' 00:14:52.598 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.598 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:52.598 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.598 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:52.598 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.598 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.598 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.598 19:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.857 19:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:14:52.857 19:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:14:53.795 19:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.795 19:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:53.795 19:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.795 19:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.795 19:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.795 19:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:53.795 19:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:14:53.795 19:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:53.795 19:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:53.795 19:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:53.795 19:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:53.795 19:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:14:53.795 19:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.795 19:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:53.795 19:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:53.795 19:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:53.795 19:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.795 19:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.795 19:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.795 19:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.795 19:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.795 19:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.795 19:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.795 19:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.732 00:14:54.732 19:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.732 19:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.732 19:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.732 19:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.732 19:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.732 19:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.732 19:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.732 19:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.732 19:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.732 { 00:14:54.732 "cntlid": 145, 00:14:54.732 "qid": 0, 00:14:54.732 "state": "enabled", 00:14:54.732 "thread": "nvmf_tgt_poll_group_000", 00:14:54.732 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:54.732 "listen_address": { 00:14:54.732 "trtype": "TCP", 00:14:54.732 "adrfam": "IPv4", 00:14:54.732 "traddr": "10.0.0.3", 00:14:54.732 "trsvcid": "4420" 00:14:54.732 }, 00:14:54.732 "peer_address": { 00:14:54.732 "trtype": "TCP", 00:14:54.732 "adrfam": "IPv4", 00:14:54.732 "traddr": "10.0.0.1", 00:14:54.732 "trsvcid": "35772" 00:14:54.732 }, 00:14:54.732 "auth": { 00:14:54.732 "state": "completed", 00:14:54.732 "digest": "sha512", 00:14:54.732 "dhgroup": "ffdhe8192" 00:14:54.732 } 00:14:54.732 } 00:14:54.732 ]' 00:14:54.732 19:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.991 19:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:54.991 19:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.991 19:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:54.991 19:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.991 19:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.991 19:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.991 19:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.251 19:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:14:55.251 19:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:00:ZDJhOWIwMDQyMmY0NjgxYzU0ZTY5ZDU0YmNhNjcwZGRjMDFhMzViOWQ5YTQwOGVieehMWQ==: --dhchap-ctrl-secret DHHC-1:03:NzcxZjJjNDBlMWFlMDIxNmE2OGM0OWVjNjc5ZTIwMjdhNjVkNjgzMmNjNWM3NGQxMDIwMTQzYWFlZGYzNWM5OMYeuPs=: 00:14:56.188 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.188 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:56.189 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.189 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.189 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.189 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 00:14:56.189 19:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.189 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.189 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.189 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:14:56.189 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:56.189 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:14:56.189 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:56.189 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:56.189 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:56.189 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:56.189 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:14:56.189 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:56.189 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:56.756 request: 00:14:56.756 { 00:14:56.756 "name": "nvme0", 00:14:56.756 "trtype": "tcp", 00:14:56.756 "traddr": "10.0.0.3", 00:14:56.756 "adrfam": "ipv4", 00:14:56.756 "trsvcid": "4420", 00:14:56.756 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:56.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:56.756 "prchk_reftag": false, 00:14:56.756 "prchk_guard": false, 00:14:56.756 "hdgst": false, 00:14:56.756 "ddgst": false, 00:14:56.756 "dhchap_key": "key2", 00:14:56.756 "allow_unrecognized_csi": false, 00:14:56.756 "method": "bdev_nvme_attach_controller", 00:14:56.756 "req_id": 1 00:14:56.756 } 00:14:56.756 Got JSON-RPC error response 00:14:56.756 response: 00:14:56.756 { 00:14:56.756 "code": -5, 00:14:56.756 "message": "Input/output error" 00:14:56.756 } 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:56.756 
19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:56.756 19:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:57.324 request: 00:14:57.324 { 00:14:57.324 "name": "nvme0", 00:14:57.324 "trtype": "tcp", 00:14:57.324 "traddr": "10.0.0.3", 00:14:57.324 "adrfam": "ipv4", 00:14:57.324 "trsvcid": "4420", 00:14:57.324 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:57.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:57.324 "prchk_reftag": false, 00:14:57.324 "prchk_guard": false, 00:14:57.324 "hdgst": false, 00:14:57.324 "ddgst": false, 00:14:57.324 "dhchap_key": "key1", 00:14:57.324 "dhchap_ctrlr_key": "ckey2", 00:14:57.324 "allow_unrecognized_csi": false, 00:14:57.324 "method": "bdev_nvme_attach_controller", 00:14:57.324 "req_id": 1 00:14:57.324 } 00:14:57.324 Got JSON-RPC error response 00:14:57.324 response: 00:14:57.324 { 
00:14:57.324 "code": -5, 00:14:57.324 "message": "Input/output error" 00:14:57.324 } 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.324 19:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.891 
request: 00:14:57.891 { 00:14:57.891 "name": "nvme0", 00:14:57.891 "trtype": "tcp", 00:14:57.891 "traddr": "10.0.0.3", 00:14:57.891 "adrfam": "ipv4", 00:14:57.891 "trsvcid": "4420", 00:14:57.891 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:57.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:14:57.891 "prchk_reftag": false, 00:14:57.891 "prchk_guard": false, 00:14:57.891 "hdgst": false, 00:14:57.891 "ddgst": false, 00:14:57.891 "dhchap_key": "key1", 00:14:57.891 "dhchap_ctrlr_key": "ckey1", 00:14:57.891 "allow_unrecognized_csi": false, 00:14:57.891 "method": "bdev_nvme_attach_controller", 00:14:57.891 "req_id": 1 00:14:57.891 } 00:14:57.891 Got JSON-RPC error response 00:14:57.891 response: 00:14:57.891 { 00:14:57.891 "code": -5, 00:14:57.891 "message": "Input/output error" 00:14:57.891 } 00:14:57.891 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:57.891 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:57.891 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:57.891 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:57.891 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:14:57.891 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.891 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.892 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.892 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67219 00:14:57.892 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67219 ']' 00:14:57.892 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67219 00:14:57.892 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:57.892 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.892 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67219 00:14:57.892 killing process with pid 67219 00:14:57.892 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.892 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.892 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67219' 00:14:57.892 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67219 00:14:57.892 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67219 00:14:58.150 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:58.150 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:58.150 19:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:58.150 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.150 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70250 00:14:58.150 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:58.150 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70250 00:14:58.150 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70250 ']' 00:14:58.150 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.150 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.150 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.150 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.150 19:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.082 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.082 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:59.082 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:59.082 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:59.082 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.340 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.341 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:59.341 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70250 00:14:59.341 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70250 ']' 00:14:59.341 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.341 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.341 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
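For reference, the target bring-up recorded above can be reproduced by hand with the same binary, flags, and RPC client paths shown in this log; this is a minimal sketch only, and the polling loop below is an assumed stand-in for the test's own waitforlisten helper rather than a copy of it:

    # launch nvmf_tgt inside the test netns with auth logging, as in the log above
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # poll the default RPC socket until the app answers before issuing further RPCs
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done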
00:14:59.341 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.341 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.599 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.599 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:59.599 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:14:59.599 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.599 19:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.599 null0 00:14:59.599 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.599 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:59.599 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Myl 00:14:59.599 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.599 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.jaV ]] 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jaV 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.HgJ 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.eh2 ]] 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eh2 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:59.858 19:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.GQo 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.3iZ ]] 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3iZ 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Gpx 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
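For reference, the authenticated attach that this step performs uses one RPC on each side; a minimal sketch with the NQNs, key name, address, and host socket path taken from this log (running rpc.py without -s for the target side assumes the default /var/tmp/spdk.sock, which is an assumption about how rpc_cmd resolves here, not something the log states):

    # target side: allow this host on cnode0 and bind DH-HMAC-CHAP key3
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 \
        --dhchap-key key3
    # host side: attach with the matching key; a wrong or missing key produces the
    # JSON-RPC "Input/output error" (code -5) responses recorded later in this log
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3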
00:14:59.858 19:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:00.795 nvme0n1 00:15:00.795 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.795 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.795 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.054 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.054 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.054 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.054 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.054 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.054 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.054 { 00:15:01.054 "cntlid": 1, 00:15:01.054 "qid": 0, 00:15:01.054 "state": "enabled", 00:15:01.054 "thread": "nvmf_tgt_poll_group_000", 00:15:01.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:15:01.054 "listen_address": { 00:15:01.054 "trtype": "TCP", 00:15:01.054 "adrfam": "IPv4", 00:15:01.054 "traddr": "10.0.0.3", 00:15:01.054 "trsvcid": "4420" 00:15:01.054 }, 00:15:01.054 "peer_address": { 00:15:01.054 "trtype": "TCP", 00:15:01.054 "adrfam": "IPv4", 00:15:01.054 "traddr": "10.0.0.1", 00:15:01.054 "trsvcid": "50606" 00:15:01.054 }, 00:15:01.054 "auth": { 00:15:01.054 "state": "completed", 00:15:01.054 "digest": "sha512", 00:15:01.054 "dhgroup": "ffdhe8192" 00:15:01.054 } 00:15:01.054 } 00:15:01.054 ]' 00:15:01.054 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.054 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:01.054 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.320 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:01.320 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.320 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.320 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.320 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.597 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:15:01.597 19:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:15:02.180 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.438 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:15:02.438 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.438 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.438 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.438 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key3 00:15:02.438 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.438 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.438 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.438 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:02.438 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:02.697 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:02.697 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:02.697 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:02.697 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:02.697 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.697 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:02.697 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.697 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:02.697 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.697 19:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.956 request: 00:15:02.956 { 00:15:02.956 "name": "nvme0", 00:15:02.956 "trtype": "tcp", 00:15:02.956 "traddr": "10.0.0.3", 00:15:02.956 "adrfam": "ipv4", 00:15:02.956 "trsvcid": "4420", 00:15:02.956 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:02.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:15:02.956 "prchk_reftag": false, 00:15:02.956 "prchk_guard": false, 00:15:02.956 "hdgst": false, 00:15:02.956 "ddgst": false, 00:15:02.956 "dhchap_key": "key3", 00:15:02.956 "allow_unrecognized_csi": false, 00:15:02.956 "method": "bdev_nvme_attach_controller", 00:15:02.956 "req_id": 1 00:15:02.956 } 00:15:02.956 Got JSON-RPC error response 00:15:02.956 response: 00:15:02.956 { 00:15:02.956 "code": -5, 00:15:02.956 "message": "Input/output error" 00:15:02.956 } 00:15:02.956 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:02.956 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:02.956 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:02.956 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:02.956 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:02.956 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:02.956 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:02.956 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:03.243 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:03.243 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:03.243 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:03.243 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:03.243 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.244 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:03.244 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.244 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:03.244 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.244 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.502 request: 00:15:03.502 { 00:15:03.502 "name": "nvme0", 00:15:03.502 "trtype": "tcp", 00:15:03.502 "traddr": "10.0.0.3", 00:15:03.502 "adrfam": "ipv4", 00:15:03.502 "trsvcid": "4420", 00:15:03.502 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:03.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:15:03.502 "prchk_reftag": false, 00:15:03.502 "prchk_guard": false, 00:15:03.502 "hdgst": false, 00:15:03.502 "ddgst": false, 00:15:03.502 "dhchap_key": "key3", 00:15:03.502 "allow_unrecognized_csi": false, 00:15:03.502 "method": "bdev_nvme_attach_controller", 00:15:03.502 "req_id": 1 00:15:03.502 } 00:15:03.502 Got JSON-RPC error response 00:15:03.502 response: 00:15:03.502 { 00:15:03.502 "code": -5, 00:15:03.502 "message": "Input/output error" 00:15:03.502 } 00:15:03.502 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:03.502 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:03.502 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:03.503 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:03.503 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:03.503 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:03.503 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:03.503 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:03.503 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:03.503 19:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:03.762 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:15:03.762 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.762 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.762 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.762 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:15:03.762 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.762 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.762 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.762 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:03.762 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:03.763 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:03.763 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:03.763 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.763 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:03.763 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.763 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:03.763 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:03.763 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:04.331 request: 00:15:04.331 { 00:15:04.331 "name": "nvme0", 00:15:04.331 "trtype": "tcp", 00:15:04.331 "traddr": "10.0.0.3", 00:15:04.331 "adrfam": "ipv4", 00:15:04.331 "trsvcid": "4420", 00:15:04.331 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:04.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:15:04.331 "prchk_reftag": false, 00:15:04.331 "prchk_guard": false, 00:15:04.331 "hdgst": false, 00:15:04.331 "ddgst": false, 00:15:04.331 "dhchap_key": "key0", 00:15:04.331 "dhchap_ctrlr_key": "key1", 00:15:04.331 "allow_unrecognized_csi": false, 00:15:04.331 "method": "bdev_nvme_attach_controller", 00:15:04.331 "req_id": 1 00:15:04.331 } 00:15:04.331 Got JSON-RPC error response 00:15:04.331 response: 00:15:04.331 { 00:15:04.331 "code": -5, 00:15:04.331 "message": "Input/output error" 00:15:04.331 } 00:15:04.331 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:04.331 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:04.331 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:04.331 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:15:04.331 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:04.331 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:04.331 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:04.590 nvme0n1 00:15:04.590 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:04.590 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:04.590 19:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.849 19:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.849 19:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.849 19:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.108 19:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 00:15:05.108 19:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.108 19:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.108 19:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.108 19:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:05.108 19:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:05.108 19:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:06.047 nvme0n1 00:15:06.047 19:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:06.047 19:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:06.047 19:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.305 19:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.305 19:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:06.305 19:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.305 19:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.305 19:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.305 19:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:06.305 19:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:06.305 19:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.563 19:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.563 19:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:15:06.563 19:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -l 0 --dhchap-secret DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: --dhchap-ctrl-secret DHHC-1:03:M2FkMGI5YmI5ODdjZDEzZmU2YTM5NzNmNDg1NzdiYTVjOWQ1MmVjNzNmMmNlNzM5YmY0OGNiMjE3NjVkOTc1YXoxhJw=: 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:07.517 19:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:08.251 request: 00:15:08.251 { 00:15:08.251 "name": "nvme0", 00:15:08.251 "trtype": "tcp", 00:15:08.251 "traddr": "10.0.0.3", 00:15:08.251 "adrfam": "ipv4", 00:15:08.251 "trsvcid": "4420", 00:15:08.251 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:08.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1", 00:15:08.251 "prchk_reftag": false, 00:15:08.251 "prchk_guard": false, 00:15:08.251 "hdgst": false, 00:15:08.251 "ddgst": false, 00:15:08.251 "dhchap_key": "key1", 00:15:08.251 "allow_unrecognized_csi": false, 00:15:08.251 "method": "bdev_nvme_attach_controller", 00:15:08.251 "req_id": 1 00:15:08.251 } 00:15:08.251 Got JSON-RPC error response 00:15:08.251 response: 00:15:08.251 { 00:15:08.251 "code": -5, 00:15:08.251 "message": "Input/output error" 00:15:08.251 } 00:15:08.251 19:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:08.251 19:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:08.251 19:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:08.251 19:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:08.251 19:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:08.251 19:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:08.251 19:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:09.187 nvme0n1 00:15:09.187 
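Editor's note: the exchange above pairs a target-side nvmf_subsystem_set_keys grant with a host-side bdev_nvme_attach_controller that presents the same DH-CHAP keys; when the pair matches, the attach prints the new bdev name (nvme0n1), and when it does not, the RPC fails with code -5 ("Input/output error") as in the request/response dumps earlier. The lines below are a minimal stand-alone sketch of that pairing, reusing the exact rpc.py invocations from the log; it assumes the target answers on its default RPC socket, the host-side bdev service listens on /var/tmp/host.sock, and keys key0..key3 were already loaded into the keyring earlier in the test run.

#!/usr/bin/env bash
# Sketch only: same RPCs as in the log above, run outside the autotest harness.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock    # host-side bdev service socket (from the log)
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1

# Target side: this host must now authenticate with key2 and expects key3 back.
$RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side: attach with the matching pair; prints the bdev name (nvme0n1) on success,
# or fails with -5 "Input/output error" when the keys do not line up.
$RPC -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
  -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3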
19:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:09.187 19:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.187 19:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:09.445 19:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.445 19:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.445 19:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.012 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:15:10.012 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.012 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.012 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.012 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:10.012 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:10.012 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:10.271 nvme0n1 00:15:10.271 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:10.271 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:10.271 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.529 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.529 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.529 19:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.789 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:10.789 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.789 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.789 19:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.789 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: '' 2s 00:15:10.789 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:10.789 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:10.789 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: 00:15:10.789 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:10.789 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:10.789 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:10.789 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: ]] 00:15:10.789 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Y2NhMzM5NWJkOTVlZGMwNzdjNjQ2NzBjMGJjOWU2ZGXFdSFf: 00:15:10.789 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:10.789 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:10.789 19:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: 2s 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:12.689 19:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: ]] 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZmZhZTI5NTZjYjRkZjY4YzI3Zjc3MmYzNDRjNDMwNjViNGFmNGRmOGRiZjQ2N2FhSaZUaA==: 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:12.689 19:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:15.235 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:15.235 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:15.235 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:15.235 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:15.235 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:15.235 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:15.235 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:15.235 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.235 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:15.235 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.235 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.235 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.235 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:15.235 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:15.235 19:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:15.803 nvme0n1 00:15:15.803 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:15.803 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.803 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.803 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.803 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:15.803 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:16.375 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:16.375 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:16.375 19:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.632 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.632 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:15:16.632 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.632 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.891 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.891 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:16.891 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:17.149 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:17.149 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.149 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:17.408 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.408 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:17.408 19:35:10 
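Editor's note: the segment above rotates keys on an already-attached controller instead of reconnecting: the target's allowed keys are changed with nvmf_subsystem_set_keys, and the host then calls bdev_nvme_set_keys on the live nvme0 controller; the NOT cases that follow show a non-matching pair being rejected with -13 ("Permission denied"). A hedged sketch of that in-place rotation, built only from RPCs visible in the log and the same socket/NQN assumptions as the previous sketch:

# Sketch only (assumptions: default target RPC socket; keys already in the keyring).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1

# 1. Target side: swap the keys this host is expected to use from now on.
$RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3
# 2. Host side: re-authenticate the existing controller without detaching it.
$RPC -s "$HOST_SOCK" bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
# 3. Confirm the controller survived the rotation (same check the test uses).
$RPC -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0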
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.408 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.408 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.408 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:17.408 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:17.408 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:17.408 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:17.408 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.408 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:17.408 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.408 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:17.408 19:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:17.975 request: 00:15:17.975 { 00:15:17.975 "name": "nvme0", 00:15:17.975 "dhchap_key": "key1", 00:15:17.975 "dhchap_ctrlr_key": "key3", 00:15:17.975 "method": "bdev_nvme_set_keys", 00:15:17.975 "req_id": 1 00:15:17.975 } 00:15:17.975 Got JSON-RPC error response 00:15:17.975 response: 00:15:17.975 { 00:15:17.975 "code": -13, 00:15:17.975 "message": "Permission denied" 00:15:17.975 } 00:15:17.975 19:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:17.975 19:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:17.975 19:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:17.975 19:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:17.975 19:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:17.975 19:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:17.975 19:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.234 19:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:18.234 19:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:19.177 19:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:19.177 19:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:19.177 19:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.435 19:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:19.435 19:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:19.435 19:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.435 19:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.435 19:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.435 19:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:19.436 19:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:19.436 19:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:20.814 nvme0n1 00:15:20.814 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:20.814 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.814 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.814 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.814 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:20.814 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:20.814 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:20.814 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:20.814 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.814 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:20.814 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.814 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:20.814 19:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:21.380 request: 00:15:21.380 { 00:15:21.380 "name": "nvme0", 00:15:21.380 "dhchap_key": "key2", 00:15:21.380 "dhchap_ctrlr_key": "key0", 00:15:21.380 "method": "bdev_nvme_set_keys", 00:15:21.380 "req_id": 1 00:15:21.380 } 00:15:21.380 Got JSON-RPC error response 00:15:21.380 response: 00:15:21.380 { 00:15:21.380 "code": -13, 00:15:21.380 "message": "Permission denied" 00:15:21.380 } 00:15:21.380 19:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:21.380 19:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.380 19:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.380 19:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.380 19:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:21.380 19:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:21.380 19:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.380 19:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:21.380 19:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:22.753 19:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:22.753 19:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:22.753 19:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.753 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:22.753 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:22.753 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:22.753 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67238 00:15:22.753 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67238 ']' 00:15:22.753 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67238 00:15:22.753 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:22.753 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.753 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67238 00:15:22.753 killing process with pid 67238 00:15:22.753 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:22.753 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:22.753 19:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67238' 00:15:22.753 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67238 00:15:22.753 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67238 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:23.321 rmmod nvme_tcp 00:15:23.321 rmmod nvme_fabrics 00:15:23.321 rmmod nvme_keyring 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70250 ']' 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70250 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70250 ']' 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70250 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70250 00:15:23.321 killing process with pid 70250 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70250' 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70250 00:15:23.321 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70250 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
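Editor's note: the cleanup above (nvmftestfini) kills the auth-target and host daemons, syncs, unloads the NVMe/TCP transport modules, and then strips only the SPDK-tagged firewall rules; the iptables-save, grep -v SPDK_NVMF and iptables-restore entries just above and below are presumably a single pipeline inside the iptr helper. A condensed sketch of that teardown, assuming a root shell on the test VM:

# Sketch of the transport teardown, not the harness's own nvmftestfini function.
sync
# The harness retries the unload up to 20 times ("for i in {1..20}" above);
# a single pass is shown here for brevity.
modprobe -v -r nvme-tcp       # also pulls out nvme_fabrics / nvme_keyring, per the rmmod lines in the log
modprobe -v -r nvme-fabrics
# Remove only the SPDK_NVMF-tagged rules, leaving the rest of the firewall untouched.
iptables-save | grep -v SPDK_NVMF | iptables-restore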
00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:23.581 19:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:23.581 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Myl /tmp/spdk.key-sha256.HgJ /tmp/spdk.key-sha384.GQo /tmp/spdk.key-sha512.Gpx /tmp/spdk.key-sha512.jaV /tmp/spdk.key-sha384.eh2 /tmp/spdk.key-sha256.3iZ '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:23.839 00:15:23.839 real 3m7.376s 00:15:23.839 user 7m29.682s 00:15:23.839 sys 0m29.294s 00:15:23.839 ************************************ 00:15:23.839 END TEST nvmf_auth_target 00:15:23.839 ************************************ 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.839 19:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.839 ************************************ 00:15:23.839 START TEST nvmf_bdevio_no_huge 00:15:23.839 ************************************ 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:23.839 * Looking for test storage... 00:15:23.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:15:23.839 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:24.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.099 --rc genhtml_branch_coverage=1 00:15:24.099 --rc genhtml_function_coverage=1 00:15:24.099 --rc genhtml_legend=1 00:15:24.099 --rc geninfo_all_blocks=1 00:15:24.099 --rc geninfo_unexecuted_blocks=1 00:15:24.099 00:15:24.099 ' 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:24.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.099 --rc genhtml_branch_coverage=1 00:15:24.099 --rc genhtml_function_coverage=1 00:15:24.099 --rc genhtml_legend=1 00:15:24.099 --rc geninfo_all_blocks=1 00:15:24.099 --rc geninfo_unexecuted_blocks=1 00:15:24.099 00:15:24.099 ' 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:24.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.099 --rc genhtml_branch_coverage=1 00:15:24.099 --rc genhtml_function_coverage=1 00:15:24.099 --rc genhtml_legend=1 00:15:24.099 --rc geninfo_all_blocks=1 00:15:24.099 --rc geninfo_unexecuted_blocks=1 00:15:24.099 00:15:24.099 ' 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:24.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.099 --rc genhtml_branch_coverage=1 00:15:24.099 --rc genhtml_function_coverage=1 00:15:24.099 --rc genhtml_legend=1 00:15:24.099 --rc geninfo_all_blocks=1 00:15:24.099 --rc geninfo_unexecuted_blocks=1 00:15:24.099 00:15:24.099 ' 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:24.099 
19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:24.099 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:24.099 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:24.100 
19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:24.100 Cannot find device "nvmf_init_br" 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:24.100 Cannot find device "nvmf_init_br2" 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:24.100 Cannot find device "nvmf_tgt_br" 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:24.100 Cannot find device "nvmf_tgt_br2" 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:24.100 Cannot find device "nvmf_init_br" 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:24.100 Cannot find device "nvmf_init_br2" 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:24.100 Cannot find device "nvmf_tgt_br" 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:24.100 Cannot find device "nvmf_tgt_br2" 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:24.100 Cannot find device "nvmf_br" 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:24.100 Cannot find device "nvmf_init_if" 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:15:24.100 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:24.360 Cannot find device "nvmf_init_if2" 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:15:24.360 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:24.360 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:24.360 19:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:24.360 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:24.360 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:15:24.360 00:15:24.360 --- 10.0.0.3 ping statistics --- 00:15:24.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.360 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:24.360 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:24.360 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:24.360 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:15:24.360 00:15:24.360 --- 10.0.0.4 ping statistics --- 00:15:24.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.361 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:24.361 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:24.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:24.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:24.361 00:15:24.361 --- 10.0.0.1 ping statistics --- 00:15:24.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.361 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:24.361 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:24.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:24.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:15:24.361 00:15:24.361 --- 10.0.0.2 ping statistics --- 00:15:24.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.361 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:15:24.361 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.361 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:15:24.361 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:24.361 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.361 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:24.361 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:24.361 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.361 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:24.361 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:24.620 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:24.620 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:24.620 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:24.620 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:24.620 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70897 00:15:24.620 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:24.620 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70897 00:15:24.620 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70897 ']' 00:15:24.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.620 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.620 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.620 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.620 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.620 19:35:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:24.620 [2024-12-05 19:35:17.883405] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
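Everything from nvmf_veth_init up to the ping statistics above amounts to building an isolated test network: one target network namespace, veth pairs for the initiator and target sides, a bridge joining the host-side ends, iptables ACCEPT rules for the NVMe/TCP port, and connectivity checks. Condensed into a sketch with the same interface names and addresses as the trace (the second initiator/target pair on 10.0.0.2/10.0.0.4 is omitted for brevity):

# target side gets its own network namespace
ip netns add nvmf_tgt_ns_spdk
# veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to a bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# addressing: initiator 10.0.0.1 in the root namespace, target 10.0.0.3 inside the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# bring everything up, including loopback inside the namespace
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# the bridge ties the root-namespace ends together so initiator traffic reaches the target
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# let NVMe/TCP (port 4420) in, and let traffic cross the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# connectivity checks in both directions, as in the ping output above
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1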
00:15:24.620 [2024-12-05 19:35:17.883509] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:24.620 [2024-12-05 19:35:18.049321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:24.880 [2024-12-05 19:35:18.135727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.880 [2024-12-05 19:35:18.135778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.880 [2024-12-05 19:35:18.135794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.880 [2024-12-05 19:35:18.135805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.880 [2024-12-05 19:35:18.135814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:24.880 [2024-12-05 19:35:18.136808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:24.880 [2024-12-05 19:35:18.136917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:24.880 [2024-12-05 19:35:18.138225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:24.880 [2024-12-05 19:35:18.138241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:24.880 [2024-12-05 19:35:18.144574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:25.817 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.817 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:15:25.817 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:25.817 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:25.817 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.817 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.817 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.817 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.818 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.818 [2024-12-05 19:35:18.985631] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.818 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.818 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:25.818 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.818 19:35:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.818 Malloc0 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.818 19:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:25.818 [2024-12-05 19:35:19.030095] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:25.818 { 00:15:25.818 "params": { 00:15:25.818 "name": "Nvme$subsystem", 00:15:25.818 "trtype": "$TEST_TRANSPORT", 00:15:25.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.818 "adrfam": "ipv4", 00:15:25.818 "trsvcid": "$NVMF_PORT", 00:15:25.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.818 "hdgst": ${hdgst:-false}, 00:15:25.818 "ddgst": ${ddgst:-false} 00:15:25.818 }, 00:15:25.818 "method": "bdev_nvme_attach_controller" 00:15:25.818 } 00:15:25.818 EOF 00:15:25.818 )") 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
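After the target is provisioned over RPC (tcp transport, a 64 MiB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.3:4420), the heredoc above is gen_nvmf_target_json producing a bdev_nvme_attach_controller config on the fly; it is piped through jq and handed to bdevio as --json /dev/fd/62, so no config file is written to disk. A hand-written equivalent is sketched below, assuming the standard SPDK "subsystems"/"bdev"/"config" JSON layout (the outer wrapper is not visible in this excerpt) and using a temporary file instead of process substitution; the parameter values are the ones the trace substitutes in:

# hypothetical stand-alone version of the config the test generates inline
cat > /tmp/bdevio_nvme.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# bdevio attaches the controller described above and runs its CUnit suite against the resulting bdev
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024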
00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:15:25.818 19:35:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:25.818 "params": { 00:15:25.818 "name": "Nvme1", 00:15:25.818 "trtype": "tcp", 00:15:25.818 "traddr": "10.0.0.3", 00:15:25.818 "adrfam": "ipv4", 00:15:25.818 "trsvcid": "4420", 00:15:25.818 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.818 "hdgst": false, 00:15:25.818 "ddgst": false 00:15:25.818 }, 00:15:25.818 "method": "bdev_nvme_attach_controller" 00:15:25.818 }' 00:15:25.818 [2024-12-05 19:35:19.100620] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:15:25.818 [2024-12-05 19:35:19.100943] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70933 ] 00:15:26.105 [2024-12-05 19:35:19.272681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:26.105 [2024-12-05 19:35:19.356199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.105 [2024-12-05 19:35:19.356262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.105 [2024-12-05 19:35:19.356267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.105 [2024-12-05 19:35:19.370883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:26.372 I/O targets: 00:15:26.372 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:26.372 00:15:26.372 00:15:26.372 CUnit - A unit testing framework for C - Version 2.1-3 00:15:26.372 http://cunit.sourceforge.net/ 00:15:26.372 00:15:26.372 00:15:26.372 Suite: bdevio tests on: Nvme1n1 00:15:26.372 Test: blockdev write read block ...passed 00:15:26.372 Test: blockdev write zeroes read block ...passed 00:15:26.372 Test: blockdev write zeroes read no split ...passed 00:15:26.372 Test: blockdev write zeroes read split ...passed 00:15:26.372 Test: blockdev write zeroes read split partial ...passed 00:15:26.372 Test: blockdev reset ...[2024-12-05 19:35:19.614088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:26.372 [2024-12-05 19:35:19.614504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de0320 (9): Bad file descriptor 00:15:26.372 [2024-12-05 19:35:19.628682] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:26.372 passed 00:15:26.372 Test: blockdev write read 8 blocks ...passed 00:15:26.372 Test: blockdev write read size > 128k ...passed 00:15:26.372 Test: blockdev write read invalid size ...passed 00:15:26.372 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:26.372 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:26.372 Test: blockdev write read max offset ...passed 00:15:26.372 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:26.372 Test: blockdev writev readv 8 blocks ...passed 00:15:26.372 Test: blockdev writev readv 30 x 1block ...passed 00:15:26.372 Test: blockdev writev readv block ...passed 00:15:26.372 Test: blockdev writev readv size > 128k ...passed 00:15:26.372 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:26.372 Test: blockdev comparev and writev ...[2024-12-05 19:35:19.638179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:26.372 [2024-12-05 19:35:19.638234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:26.372 [2024-12-05 19:35:19.638273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:26.372 [2024-12-05 19:35:19.638287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:26.372 [2024-12-05 19:35:19.638591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:26.372 [2024-12-05 19:35:19.638612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:26.372 [2024-12-05 19:35:19.638632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:26.372 [2024-12-05 19:35:19.638645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:26.372 [2024-12-05 19:35:19.638961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:26.372 [2024-12-05 19:35:19.638986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:26.372 [2024-12-05 19:35:19.639008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:26.372 [2024-12-05 19:35:19.639020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:26.372 [2024-12-05 19:35:19.639424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:26.372 [2024-12-05 19:35:19.639460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:26.372 [2024-12-05 19:35:19.639483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:26.372 [2024-12-05 19:35:19.639496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:26.372 passed 00:15:26.372 Test: blockdev nvme passthru rw ...passed 00:15:26.372 Test: blockdev nvme passthru vendor specific ...[2024-12-05 19:35:19.640332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:26.372 [2024-12-05 19:35:19.640372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:26.372 [2024-12-05 19:35:19.640495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:26.372 [2024-12-05 19:35:19.640515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:26.372 passed 00:15:26.372 Test: blockdev nvme admin passthru ...[2024-12-05 19:35:19.640639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:26.372 [2024-12-05 19:35:19.640665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:26.372 [2024-12-05 19:35:19.640782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:26.372 [2024-12-05 19:35:19.640801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:26.372 passed 00:15:26.372 Test: blockdev copy ...passed 00:15:26.372 00:15:26.372 Run Summary: Type Total Ran Passed Failed Inactive 00:15:26.372 suites 1 1 n/a 0 0 00:15:26.372 tests 23 23 23 0 0 00:15:26.372 asserts 152 152 152 0 n/a 00:15:26.372 00:15:26.372 Elapsed time = 0.176 seconds 00:15:26.632 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:26.632 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.632 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:26.632 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.632 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:26.632 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:26.632 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:26.632 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:26.891 rmmod nvme_tcp 00:15:26.891 rmmod nvme_fabrics 00:15:26.891 rmmod nvme_keyring 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70897 ']' 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70897 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70897 ']' 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70897 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70897 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:26.891 killing process with pid 70897 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70897' 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70897 00:15:26.891 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70897 00:15:27.150 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:27.150 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:27.150 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:27.150 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:15:27.150 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:15:27.150 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:15:27.150 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:27.150 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:27.150 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:27.150 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:27.409 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:27.409 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:27.409 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:27.409 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:27.409 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:27.410 19:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:27.410 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:27.410 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:27.410 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:27.410 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:27.410 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.410 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.410 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:27.410 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.410 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.410 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.669 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:15:27.669 00:15:27.669 real 0m3.674s 00:15:27.669 user 0m11.183s 00:15:27.669 sys 0m1.474s 00:15:27.669 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.669 ************************************ 00:15:27.669 END TEST nvmf_bdevio_no_huge 00:15:27.669 ************************************ 00:15:27.669 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:27.669 19:35:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:27.669 19:35:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:27.669 19:35:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.669 19:35:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:27.669 ************************************ 00:15:27.669 START TEST nvmf_tls 00:15:27.669 ************************************ 00:15:27.669 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:27.669 * Looking for test storage... 
00:15:27.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:27.670 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:27.670 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:27.670 19:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:27.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.670 --rc genhtml_branch_coverage=1 00:15:27.670 --rc genhtml_function_coverage=1 00:15:27.670 --rc genhtml_legend=1 00:15:27.670 --rc geninfo_all_blocks=1 00:15:27.670 --rc geninfo_unexecuted_blocks=1 00:15:27.670 00:15:27.670 ' 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:27.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.670 --rc genhtml_branch_coverage=1 00:15:27.670 --rc genhtml_function_coverage=1 00:15:27.670 --rc genhtml_legend=1 00:15:27.670 --rc geninfo_all_blocks=1 00:15:27.670 --rc geninfo_unexecuted_blocks=1 00:15:27.670 00:15:27.670 ' 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:27.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.670 --rc genhtml_branch_coverage=1 00:15:27.670 --rc genhtml_function_coverage=1 00:15:27.670 --rc genhtml_legend=1 00:15:27.670 --rc geninfo_all_blocks=1 00:15:27.670 --rc geninfo_unexecuted_blocks=1 00:15:27.670 00:15:27.670 ' 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:27.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.670 --rc genhtml_branch_coverage=1 00:15:27.670 --rc genhtml_function_coverage=1 00:15:27.670 --rc genhtml_legend=1 00:15:27.670 --rc geninfo_all_blocks=1 00:15:27.670 --rc geninfo_unexecuted_blocks=1 00:15:27.670 00:15:27.670 ' 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.670 19:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.670 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.671 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:27.671 
19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.671 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:27.930 Cannot find device "nvmf_init_br" 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:27.930 Cannot find device "nvmf_init_br2" 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:27.930 Cannot find device "nvmf_tgt_br" 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:27.930 Cannot find device "nvmf_tgt_br2" 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:27.930 Cannot find device "nvmf_init_br" 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:27.930 Cannot find device "nvmf_init_br2" 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:27.930 Cannot find device "nvmf_tgt_br" 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:27.930 Cannot find device "nvmf_tgt_br2" 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:27.930 Cannot find device "nvmf_br" 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:27.930 Cannot find device "nvmf_init_if" 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:27.930 Cannot find device "nvmf_init_if2" 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:27.930 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:28.188 19:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:28.188 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:28.188 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:15:28.188 00:15:28.188 --- 10.0.0.3 ping statistics --- 00:15:28.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.188 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:28.188 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:28.188 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:15:28.188 00:15:28.188 --- 10.0.0.4 ping statistics --- 00:15:28.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.188 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:28.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:28.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:28.188 00:15:28.188 --- 10.0.0.1 ping statistics --- 00:15:28.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.188 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:28.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:28.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:15:28.188 00:15:28.188 --- 10.0.0.2 ping statistics --- 00:15:28.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.188 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.188 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:28.189 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:28.189 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:28.189 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:28.189 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:28.189 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.189 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71179 00:15:28.189 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71179 00:15:28.189 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71179 ']' 00:15:28.189 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.189 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:28.189 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.189 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.189 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.189 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.189 [2024-12-05 19:35:21.529723] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
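For reference, the nvmf_veth_init sequence traced above boils down to the topology sketched below. This is a condensed reconstruction assembled only from the commands visible in the trace (interface names, addresses and the 4420 port are the ones shown there), not the verbatim contents of test/nvmf/common.sh; the per-interface link-up commands and the ping checks are omitted for brevity.

# Target-side interfaces live in a private network namespace; each veth's
# bridge-side peer ends up on a shared bridge so initiator and target can talk.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4 inside the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# One bridge ties the four peer interfaces together; iptables opens the
# NVMe/TCP port on the initiator interfaces and allows bridged forwarding.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT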
00:15:28.189 [2024-12-05 19:35:21.529823] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.447 [2024-12-05 19:35:21.683107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.447 [2024-12-05 19:35:21.752629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.447 [2024-12-05 19:35:21.752704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.447 [2024-12-05 19:35:21.752729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.447 [2024-12-05 19:35:21.752748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.447 [2024-12-05 19:35:21.752757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.447 [2024-12-05 19:35:21.753218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.447 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.447 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:28.447 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:28.447 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:28.447 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.447 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.447 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:15:28.447 19:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:28.705 true 00:15:28.705 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:28.705 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:15:28.964 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:15:28.964 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:15:28.964 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:29.531 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:15:29.531 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:29.531 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:15:29.531 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:15:29.531 19:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:29.790 19:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:15:29.790 19:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:15:30.048 19:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:15:30.048 19:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:15:30.048 19:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:30.048 19:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:15:30.308 19:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:15:30.308 19:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:15:30.308 19:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:30.567 19:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:30.567 19:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:15:30.827 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:15:30.827 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:15:30.827 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:31.086 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:31.086 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:31.358 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:31.618 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:31.618 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:31.618 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.tlacv7ULS5 00:15:31.618 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:15:31.618 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.tMu1cCdqw3 00:15:31.618 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:31.618 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:31.618 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.tlacv7ULS5 00:15:31.618 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.tMu1cCdqw3 00:15:31.618 19:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:31.878 19:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:32.136 [2024-12-05 19:35:25.496379] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:32.136 19:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.tlacv7ULS5 00:15:32.136 19:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.tlacv7ULS5 00:15:32.136 19:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:32.395 [2024-12-05 19:35:25.760819] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.395 19:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:32.653 19:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:32.911 [2024-12-05 19:35:26.284982] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:32.911 [2024-12-05 19:35:26.285240] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:32.911 19:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:33.171 malloc0 00:15:33.171 19:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:33.431 19:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tlacv7ULS5 00:15:33.689 19:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:33.949 19:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.tlacv7ULS5 00:15:46.186 Initializing NVMe Controllers 00:15:46.186 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:46.186 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:46.186 Initialization complete. Launching workers. 00:15:46.186 ======================================================== 00:15:46.186 Latency(us) 00:15:46.186 Device Information : IOPS MiB/s Average min max 00:15:46.186 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9987.37 39.01 6409.32 1747.19 9283.00 00:15:46.186 ======================================================== 00:15:46.186 Total : 9987.37 39.01 6409.32 1747.19 9283.00 00:15:46.186 00:15:46.186 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tlacv7ULS5 00:15:46.186 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:46.186 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:46.186 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:46.186 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tlacv7ULS5 00:15:46.186 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:46.186 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71405 00:15:46.186 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:46.186 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:46.186 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71405 /var/tmp/bdevperf.sock 00:15:46.186 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71405 ']' 00:15:46.186 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:46.186 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.186 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:46.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
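The two NVMeTLSkey-1 strings generated a few steps back (written to /tmp/tmp.tlacv7ULS5 and /tmp/tmp.tMu1cCdqw3) are in the NVMe/TCP TLS PSK interchange format: a version prefix, a two-digit hash indicator (01 corresponding to HMAC SHA-256 here), then base64 of the configured key bytes followed by their CRC-32. The inline "python -" helper in the trace computes roughly the following; treat this as a sketch of what format_interchange_psk in test/nvmf/common.sh does, not its exact text.

format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # key material, as passed on the command line
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 appended per the interchange format
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]),
      base64.b64encode(key + crc).decode()), end="")
EOF
}
# format_interchange_psk 00112233445566778899aabbccddeeff 1 should reproduce the
# key_path value captured above (NVMeTLSkey-1:01:MDAx...JEiQ:), assuming this
# matches the real helper's encoding.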
00:15:46.186 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.186 19:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.186 [2024-12-05 19:35:37.511560] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:15:46.186 [2024-12-05 19:35:37.511689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71405 ] 00:15:46.186 [2024-12-05 19:35:37.663863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.186 [2024-12-05 19:35:37.715413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.186 [2024-12-05 19:35:37.774735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:46.186 19:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.186 19:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:46.186 19:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tlacv7ULS5 00:15:46.186 19:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:46.186 [2024-12-05 19:35:39.033889] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:46.186 TLSTESTn1 00:15:46.186 19:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:46.186 Running I/O for 10 seconds... 
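Before the throughput samples below, the setup that got us here condenses to the handful of RPC calls that actually matter; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, bdevperf.py for the script under examples/bdev/bdevperf, and the waitforlisten/trap plumbing from the trace is left out.

# Target side (default /var/tmp/spdk.sock inside the netns): a TLS-enabled
# listener plus a host entry bound to the PSK file registered as key0.
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.tlacv7ULS5
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side (bdevperf): register the same key on the bdevperf RPC socket,
# attach a controller over TLS, then drive the verify workload via bdevperf.py.
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tlacv7ULS5
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests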
00:15:48.059 4352.00 IOPS, 17.00 MiB/s [2024-12-05T19:35:42.434Z] 4352.00 IOPS, 17.00 MiB/s [2024-12-05T19:35:43.371Z] 4423.67 IOPS, 17.28 MiB/s [2024-12-05T19:35:44.324Z] 4447.00 IOPS, 17.37 MiB/s [2024-12-05T19:35:45.269Z] 4446.40 IOPS, 17.37 MiB/s [2024-12-05T19:35:46.646Z] 4447.50 IOPS, 17.37 MiB/s [2024-12-05T19:35:47.583Z] 4431.57 IOPS, 17.31 MiB/s [2024-12-05T19:35:48.520Z] 4414.50 IOPS, 17.24 MiB/s [2024-12-05T19:35:49.458Z] 4397.11 IOPS, 17.18 MiB/s [2024-12-05T19:35:49.458Z] 4392.80 IOPS, 17.16 MiB/s 00:15:56.021 Latency(us) 00:15:56.021 [2024-12-05T19:35:49.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.021 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:56.021 Verification LBA range: start 0x0 length 0x2000 00:15:56.021 TLSTESTn1 : 10.02 4398.06 17.18 0.00 0.00 29050.72 5838.66 21328.99 00:15:56.021 [2024-12-05T19:35:49.458Z] =================================================================================================================== 00:15:56.021 [2024-12-05T19:35:49.458Z] Total : 4398.06 17.18 0.00 0.00 29050.72 5838.66 21328.99 00:15:56.021 { 00:15:56.021 "results": [ 00:15:56.021 { 00:15:56.021 "job": "TLSTESTn1", 00:15:56.021 "core_mask": "0x4", 00:15:56.021 "workload": "verify", 00:15:56.021 "status": "finished", 00:15:56.021 "verify_range": { 00:15:56.021 "start": 0, 00:15:56.021 "length": 8192 00:15:56.021 }, 00:15:56.021 "queue_depth": 128, 00:15:56.021 "io_size": 4096, 00:15:56.021 "runtime": 10.016224, 00:15:56.021 "iops": 4398.064580025367, 00:15:56.021 "mibps": 17.17993976572409, 00:15:56.021 "io_failed": 0, 00:15:56.021 "io_timeout": 0, 00:15:56.021 "avg_latency_us": 29050.71508547749, 00:15:56.021 "min_latency_us": 5838.6618181818185, 00:15:56.021 "max_latency_us": 21328.98909090909 00:15:56.021 } 00:15:56.021 ], 00:15:56.021 "core_count": 1 00:15:56.021 } 00:15:56.021 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:56.021 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71405 00:15:56.021 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71405 ']' 00:15:56.021 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71405 00:15:56.022 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:56.022 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.022 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71405 00:15:56.022 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:56.022 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:56.022 killing process with pid 71405 00:15:56.022 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71405' 00:15:56.022 Received shutdown signal, test time was about 10.000000 seconds 00:15:56.022 00:15:56.022 Latency(us) 00:15:56.022 [2024-12-05T19:35:49.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.022 [2024-12-05T19:35:49.459Z] =================================================================================================================== 00:15:56.022 [2024-12-05T19:35:49.459Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:15:56.022 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71405 00:15:56.022 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71405 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tMu1cCdqw3 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tMu1cCdqw3 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tMu1cCdqw3 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tMu1cCdqw3 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71540 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71540 /var/tmp/bdevperf.sock 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71540 ']' 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.281 19:35:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.281 [2024-12-05 19:35:49.568407] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:15:56.281 [2024-12-05 19:35:49.568517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71540 ] 00:15:56.281 [2024-12-05 19:35:49.711526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.539 [2024-12-05 19:35:49.767985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.539 [2024-12-05 19:35:49.825282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:57.475 19:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.475 19:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:57.475 19:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tMu1cCdqw3 00:15:57.475 19:35:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:57.735 [2024-12-05 19:35:51.114968] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:57.735 [2024-12-05 19:35:51.120213] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:57.735 [2024-12-05 19:35:51.120776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a0ff0 (107): Transport endpoint is not connected 00:15:57.735 [2024-12-05 19:35:51.121765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a0ff0 (9): Bad file descriptor 00:15:57.735 [2024-12-05 19:35:51.122761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:15:57.735 [2024-12-05 19:35:51.122783] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:57.735 [2024-12-05 19:35:51.122809] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:57.735 [2024-12-05 19:35:51.122824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
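This failure is the expected outcome of the test: the initiator registered /tmp/tmp.tMu1cCdqw3 as key0 on the bdevperf socket, while the target's host1 entry still holds the first key, so the TLS handshake never completes, the socket read fails with errno 107 (Transport endpoint is not connected) and bdev_nvme_attach_controller returns the -5 error dumped next. The NOT wrapper around run_bdevperf inverts the exit status so that an expected failure keeps the suite green; roughly the following, though the real helper lives in test/common/autotest_common.sh and is more elaborate:

NOT() {
    # Succeed only when the wrapped command fails (sketch, not the real helper).
    if "$@"; then
        return 1
    fi
    return 0
}
# Used above as:
# NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tMu1cCdqw3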
00:15:57.735 request: 00:15:57.735 { 00:15:57.735 "name": "TLSTEST", 00:15:57.735 "trtype": "tcp", 00:15:57.735 "traddr": "10.0.0.3", 00:15:57.735 "adrfam": "ipv4", 00:15:57.735 "trsvcid": "4420", 00:15:57.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:57.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:57.735 "prchk_reftag": false, 00:15:57.735 "prchk_guard": false, 00:15:57.735 "hdgst": false, 00:15:57.735 "ddgst": false, 00:15:57.735 "psk": "key0", 00:15:57.735 "allow_unrecognized_csi": false, 00:15:57.735 "method": "bdev_nvme_attach_controller", 00:15:57.735 "req_id": 1 00:15:57.735 } 00:15:57.735 Got JSON-RPC error response 00:15:57.735 response: 00:15:57.735 { 00:15:57.735 "code": -5, 00:15:57.735 "message": "Input/output error" 00:15:57.735 } 00:15:57.735 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71540 00:15:57.736 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71540 ']' 00:15:57.736 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71540 00:15:57.736 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:57.736 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.736 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71540 00:15:57.736 killing process with pid 71540 00:15:57.736 Received shutdown signal, test time was about 10.000000 seconds 00:15:57.736 00:15:57.736 Latency(us) 00:15:57.736 [2024-12-05T19:35:51.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.736 [2024-12-05T19:35:51.173Z] =================================================================================================================== 00:15:57.736 [2024-12-05T19:35:51.173Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:57.736 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:57.736 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:57.736 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71540' 00:15:57.736 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71540 00:15:57.736 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71540 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tlacv7ULS5 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tlacv7ULS5 
00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tlacv7ULS5 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tlacv7ULS5 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71574 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71574 /var/tmp/bdevperf.sock 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71574 ']' 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:58.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.018 19:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:58.018 [2024-12-05 19:35:51.420141] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:15:58.018 [2024-12-05 19:35:51.420231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71574 ] 00:15:58.277 [2024-12-05 19:35:51.566281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.277 [2024-12-05 19:35:51.628083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.277 [2024-12-05 19:35:51.684864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:59.212 19:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.212 19:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:59.212 19:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tlacv7ULS5 00:15:59.471 19:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:15:59.729 [2024-12-05 19:35:52.934794] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:59.729 [2024-12-05 19:35:52.940246] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:59.729 [2024-12-05 19:35:52.940288] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:59.729 [2024-12-05 19:35:52.940341] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:59.729 [2024-12-05 19:35:52.940935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2100ff0 (107): Transport endpoint is not connected 00:15:59.729 [2024-12-05 19:35:52.941924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2100ff0 (9): Bad file descriptor 00:15:59.729 [2024-12-05 19:35:52.942921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:15:59.729 [2024-12-05 19:35:52.942945] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:59.729 [2024-12-05 19:35:52.942972] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:59.729 [2024-12-05 19:35:52.942991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
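Here the rejection happens one step earlier, on the target: the TLS PSK identity is derived as "NVMe0R01 <hostnqn> <subnqn>", and tcp_sock_get_key looks that identity up among the hosts added to the subsystem. nqn.2016-06.io.spdk:host2 was never registered, so the lookup fails and the handshake is refused before any key comparison; the cnode2 case further down fails the same way for an unknown subsystem/host pair, and the final case with an empty key path is rejected even earlier, by keyring_file_add_key itself. Had host2 been meant to connect, the target-side registration would mirror the host1 call used earlier, for example (hypothetical here, since the test intends this attach to fail):

rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk key0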
00:15:59.729 request: 00:15:59.729 { 00:15:59.729 "name": "TLSTEST", 00:15:59.729 "trtype": "tcp", 00:15:59.729 "traddr": "10.0.0.3", 00:15:59.729 "adrfam": "ipv4", 00:15:59.729 "trsvcid": "4420", 00:15:59.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.729 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:59.729 "prchk_reftag": false, 00:15:59.729 "prchk_guard": false, 00:15:59.729 "hdgst": false, 00:15:59.729 "ddgst": false, 00:15:59.729 "psk": "key0", 00:15:59.729 "allow_unrecognized_csi": false, 00:15:59.729 "method": "bdev_nvme_attach_controller", 00:15:59.729 "req_id": 1 00:15:59.729 } 00:15:59.729 Got JSON-RPC error response 00:15:59.729 response: 00:15:59.729 { 00:15:59.729 "code": -5, 00:15:59.729 "message": "Input/output error" 00:15:59.729 } 00:15:59.729 19:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71574 00:15:59.729 19:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71574 ']' 00:15:59.729 19:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71574 00:15:59.729 19:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:59.729 19:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.729 19:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71574 00:15:59.729 killing process with pid 71574 00:15:59.729 Received shutdown signal, test time was about 10.000000 seconds 00:15:59.729 00:15:59.729 Latency(us) 00:15:59.729 [2024-12-05T19:35:53.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.729 [2024-12-05T19:35:53.166Z] =================================================================================================================== 00:15:59.729 [2024-12-05T19:35:53.166Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:59.729 19:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:59.729 19:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:59.729 19:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71574' 00:15:59.729 19:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71574 00:15:59.729 19:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71574 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tlacv7ULS5 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tlacv7ULS5 
00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tlacv7ULS5 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tlacv7ULS5 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71611 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71611 /var/tmp/bdevperf.sock 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71611 ']' 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:59.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.988 19:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:59.988 [2024-12-05 19:35:53.242071] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:15:59.988 [2024-12-05 19:35:53.242384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71611 ] 00:15:59.988 [2024-12-05 19:35:53.394569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.247 [2024-12-05 19:35:53.450246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.247 [2024-12-05 19:35:53.508582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:01.182 19:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.182 19:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:01.182 19:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tlacv7ULS5 00:16:01.182 19:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:01.441 [2024-12-05 19:35:54.780470] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:01.441 [2024-12-05 19:35:54.787858] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:01.441 [2024-12-05 19:35:54.788204] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:01.441 [2024-12-05 19:35:54.788265] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:01.441 [2024-12-05 19:35:54.788377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff1ff0 (107): Transport endpoint is not connected 00:16:01.441 [2024-12-05 19:35:54.789379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff1ff0 (9): Bad file descriptor 00:16:01.441 [2024-12-05 19:35:54.790376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:16:01.441 [2024-12-05 19:35:54.790555] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:01.441 [2024-12-05 19:35:54.790588] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:01.441 [2024-12-05 19:35:54.790608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:16:01.441 request: 00:16:01.441 { 00:16:01.441 "name": "TLSTEST", 00:16:01.441 "trtype": "tcp", 00:16:01.441 "traddr": "10.0.0.3", 00:16:01.441 "adrfam": "ipv4", 00:16:01.441 "trsvcid": "4420", 00:16:01.441 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:01.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:01.441 "prchk_reftag": false, 00:16:01.441 "prchk_guard": false, 00:16:01.441 "hdgst": false, 00:16:01.441 "ddgst": false, 00:16:01.441 "psk": "key0", 00:16:01.441 "allow_unrecognized_csi": false, 00:16:01.441 "method": "bdev_nvme_attach_controller", 00:16:01.441 "req_id": 1 00:16:01.441 } 00:16:01.441 Got JSON-RPC error response 00:16:01.441 response: 00:16:01.441 { 00:16:01.441 "code": -5, 00:16:01.441 "message": "Input/output error" 00:16:01.441 } 00:16:01.441 19:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71611 00:16:01.441 19:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71611 ']' 00:16:01.441 19:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71611 00:16:01.441 19:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:01.441 19:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.441 19:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71611 00:16:01.441 19:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:01.442 19:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:01.442 19:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71611' 00:16:01.442 killing process with pid 71611 00:16:01.442 19:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71611 00:16:01.442 Received shutdown signal, test time was about 10.000000 seconds 00:16:01.442 00:16:01.442 Latency(us) 00:16:01.442 [2024-12-05T19:35:54.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.442 [2024-12-05T19:35:54.879Z] =================================================================================================================== 00:16:01.442 [2024-12-05T19:35:54.879Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:01.442 19:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71611 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:01.700 19:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71641 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71641 /var/tmp/bdevperf.sock 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71641 ']' 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:01.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.700 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:01.700 [2024-12-05 19:35:55.095691] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:16:01.701 [2024-12-05 19:35:55.096081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71641 ] 00:16:01.959 [2024-12-05 19:35:55.232527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.959 [2024-12-05 19:35:55.289606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.959 [2024-12-05 19:35:55.343300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:02.245 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.245 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:02.245 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:02.245 [2024-12-05 19:35:55.638979] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:02.245 [2024-12-05 19:35:55.639029] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:02.245 request: 00:16:02.245 { 00:16:02.245 "name": "key0", 00:16:02.245 "path": "", 00:16:02.245 "method": "keyring_file_add_key", 00:16:02.245 "req_id": 1 00:16:02.245 } 00:16:02.245 Got JSON-RPC error response 00:16:02.245 response: 00:16:02.245 { 00:16:02.245 "code": -1, 00:16:02.245 "message": "Operation not permitted" 00:16:02.245 } 00:16:02.245 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:02.527 [2024-12-05 19:35:55.883187] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:02.527 [2024-12-05 19:35:55.883630] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:02.527 request: 00:16:02.527 { 00:16:02.527 "name": "TLSTEST", 00:16:02.527 "trtype": "tcp", 00:16:02.527 "traddr": "10.0.0.3", 00:16:02.527 "adrfam": "ipv4", 00:16:02.527 "trsvcid": "4420", 00:16:02.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:02.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:02.527 "prchk_reftag": false, 00:16:02.527 "prchk_guard": false, 00:16:02.527 "hdgst": false, 00:16:02.527 "ddgst": false, 00:16:02.527 "psk": "key0", 00:16:02.527 "allow_unrecognized_csi": false, 00:16:02.527 "method": "bdev_nvme_attach_controller", 00:16:02.527 "req_id": 1 00:16:02.527 } 00:16:02.527 Got JSON-RPC error response 00:16:02.527 response: 00:16:02.527 { 00:16:02.527 "code": -126, 00:16:02.527 "message": "Required key not available" 00:16:02.527 } 00:16:02.528 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71641 00:16:02.528 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71641 ']' 00:16:02.528 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71641 00:16:02.528 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:02.528 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.528 19:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71641 00:16:02.528 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:02.528 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:02.528 killing process with pid 71641 00:16:02.528 Received shutdown signal, test time was about 10.000000 seconds 00:16:02.528 00:16:02.528 Latency(us) 00:16:02.528 [2024-12-05T19:35:55.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.528 [2024-12-05T19:35:55.965Z] =================================================================================================================== 00:16:02.528 [2024-12-05T19:35:55.965Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:02.528 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71641' 00:16:02.528 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71641 00:16:02.528 19:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71641 00:16:02.786 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:02.786 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:02.786 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:02.786 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:02.786 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:02.786 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71179 00:16:02.786 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71179 ']' 00:16:02.786 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71179 00:16:02.786 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:02.786 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.786 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71179 00:16:02.786 killing process with pid 71179 00:16:02.786 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:02.786 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:02.786 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71179' 00:16:02.786 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71179 00:16:02.786 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71179 00:16:03.045 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:03.045 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:03.045 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:03.045 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:16:03.045 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:03.045 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:16:03.045 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:03.045 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:03.045 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:16:03.046 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.BrItUbweve 00:16:03.046 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:03.046 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.BrItUbweve 00:16:03.046 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:16:03.046 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:03.046 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:03.046 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:03.046 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71672 00:16:03.046 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:03.046 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71672 00:16:03.046 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71672 ']' 00:16:03.046 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.046 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.046 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.046 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.046 19:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:03.304 [2024-12-05 19:35:56.494053] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:16:03.304 [2024-12-05 19:35:56.494191] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.304 [2024-12-05 19:35:56.641484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.304 [2024-12-05 19:35:56.694172] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.304 [2024-12-05 19:35:56.694237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:03.304 [2024-12-05 19:35:56.694264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.304 [2024-12-05 19:35:56.694272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.304 [2024-12-05 19:35:56.694279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.304 [2024-12-05 19:35:56.694613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.563 [2024-12-05 19:35:56.746735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:04.132 19:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.132 19:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:04.132 19:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:04.132 19:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:04.132 19:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:04.132 19:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.132 19:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.BrItUbweve 00:16:04.132 19:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BrItUbweve 00:16:04.132 19:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:04.390 [2024-12-05 19:35:57.771274] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.390 19:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:04.649 19:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:04.907 [2024-12-05 19:35:58.231348] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:04.907 [2024-12-05 19:35:58.231597] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:04.907 19:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:05.164 malloc0 00:16:05.164 19:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:05.422 19:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BrItUbweve 00:16:05.680 19:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:05.938 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BrItUbweve 00:16:05.938 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
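The key_long value used from here on is produced by format_key's inline `python -` helper, whose body never appears in the trace. Decoding the base64 payload of key_long shows the 48-character hex key string followed by four extra bytes, so a plausible reconstruction, assuming those trailing bytes are a little-endian CRC-32 of the key text, is:

# Hedged sketch of format_key: emit "<prefix>:<hash id>:<base64(key + checksum)>:".
# The "02" hash id corresponds to the digest argument of 2 seen above; the CRC-32
# layout is an assumption, not taken from the script body.
key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" << 'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed checksum layout
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
EOF

If the checksum assumption holds, this prints a key of the same shape as the NVMeTLSkey-1:02:...: string captured above, which is then written to /tmp/tmp.BrItUbweve and registered on both sides.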
00:16:05.938 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:05.938 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:05.938 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BrItUbweve 00:16:05.938 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:05.938 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:05.938 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71733 00:16:05.938 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:05.938 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71733 /var/tmp/bdevperf.sock 00:16:05.938 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71733 ']' 00:16:05.938 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:05.938 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.938 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:05.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:05.938 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.938 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.938 [2024-12-05 19:35:59.235097] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:16:05.938 [2024-12-05 19:35:59.235229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71733 ] 00:16:06.197 [2024-12-05 19:35:59.382109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.197 [2024-12-05 19:35:59.443462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.197 [2024-12-05 19:35:59.500747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:06.197 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.197 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:06.197 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BrItUbweve 00:16:06.455 19:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:06.714 [2024-12-05 19:36:00.065740] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:06.714 TLSTESTn1 00:16:06.973 19:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:06.973 Running I/O for 10 seconds... 00:16:08.946 4420.00 IOPS, 17.27 MiB/s [2024-12-05T19:36:03.320Z] 4589.50 IOPS, 17.93 MiB/s [2024-12-05T19:36:04.697Z] 4583.67 IOPS, 17.90 MiB/s [2024-12-05T19:36:05.274Z] 4573.25 IOPS, 17.86 MiB/s [2024-12-05T19:36:06.650Z] 4567.80 IOPS, 17.84 MiB/s [2024-12-05T19:36:07.588Z] 4558.50 IOPS, 17.81 MiB/s [2024-12-05T19:36:08.525Z] 4557.43 IOPS, 17.80 MiB/s [2024-12-05T19:36:09.462Z] 4555.00 IOPS, 17.79 MiB/s [2024-12-05T19:36:10.417Z] 4550.00 IOPS, 17.77 MiB/s [2024-12-05T19:36:10.417Z] 4546.10 IOPS, 17.76 MiB/s 00:16:16.980 Latency(us) 00:16:16.980 [2024-12-05T19:36:10.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.980 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:16.980 Verification LBA range: start 0x0 length 0x2000 00:16:16.980 TLSTESTn1 : 10.02 4551.35 17.78 0.00 0.00 28071.68 6106.76 20733.21 00:16:16.980 [2024-12-05T19:36:10.417Z] =================================================================================================================== 00:16:16.980 [2024-12-05T19:36:10.417Z] Total : 4551.35 17.78 0.00 0.00 28071.68 6106.76 20733.21 00:16:16.980 { 00:16:16.980 "results": [ 00:16:16.980 { 00:16:16.980 "job": "TLSTESTn1", 00:16:16.980 "core_mask": "0x4", 00:16:16.980 "workload": "verify", 00:16:16.980 "status": "finished", 00:16:16.980 "verify_range": { 00:16:16.980 "start": 0, 00:16:16.980 "length": 8192 00:16:16.980 }, 00:16:16.980 "queue_depth": 128, 00:16:16.980 "io_size": 4096, 00:16:16.980 "runtime": 10.015921, 00:16:16.980 "iops": 4551.353789631527, 00:16:16.980 "mibps": 17.778725740748154, 00:16:16.980 "io_failed": 0, 00:16:16.980 "io_timeout": 0, 00:16:16.980 "avg_latency_us": 28071.68277214296, 00:16:16.980 "min_latency_us": 6106.763636363637, 00:16:16.980 
"max_latency_us": 20733.20727272727 00:16:16.980 } 00:16:16.980 ], 00:16:16.980 "core_count": 1 00:16:16.980 } 00:16:16.980 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:16.980 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71733 00:16:16.980 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71733 ']' 00:16:16.980 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71733 00:16:16.980 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:16.980 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.980 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71733 00:16:16.980 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:16.980 killing process with pid 71733 00:16:16.980 Received shutdown signal, test time was about 10.000000 seconds 00:16:16.980 00:16:16.980 Latency(us) 00:16:16.980 [2024-12-05T19:36:10.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.980 [2024-12-05T19:36:10.418Z] =================================================================================================================== 00:16:16.981 [2024-12-05T19:36:10.418Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:16.981 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:16.981 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71733' 00:16:16.981 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71733 00:16:16.981 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71733 00:16:17.238 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.BrItUbweve 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BrItUbweve 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BrItUbweve 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BrItUbweve 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BrItUbweve 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:17.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71861 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71861 /var/tmp/bdevperf.sock 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71861 ']' 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.239 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:17.239 [2024-12-05 19:36:10.580462] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:16:17.239 [2024-12-05 19:36:10.580578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71861 ] 00:16:17.497 [2024-12-05 19:36:10.719710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.497 [2024-12-05 19:36:10.773257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.497 [2024-12-05 19:36:10.829561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:17.497 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.497 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:17.497 19:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BrItUbweve 00:16:17.755 [2024-12-05 19:36:11.153216] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BrItUbweve': 0100666 00:16:17.755 [2024-12-05 19:36:11.153299] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:17.755 request: 00:16:17.755 { 00:16:17.755 "name": "key0", 00:16:17.755 "path": "/tmp/tmp.BrItUbweve", 00:16:17.755 "method": "keyring_file_add_key", 00:16:17.755 "req_id": 1 00:16:17.755 } 00:16:17.755 Got JSON-RPC error response 00:16:17.755 response: 00:16:17.755 { 00:16:17.755 "code": -1, 00:16:17.755 "message": "Operation not permitted" 00:16:17.755 } 00:16:17.755 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:18.013 [2024-12-05 19:36:11.441437] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:18.013 [2024-12-05 19:36:11.441515] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:18.013 request: 00:16:18.013 { 00:16:18.013 "name": "TLSTEST", 00:16:18.013 "trtype": "tcp", 00:16:18.013 "traddr": "10.0.0.3", 00:16:18.013 "adrfam": "ipv4", 00:16:18.013 "trsvcid": "4420", 00:16:18.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.013 "prchk_reftag": false, 00:16:18.013 "prchk_guard": false, 00:16:18.013 "hdgst": false, 00:16:18.013 "ddgst": false, 00:16:18.013 "psk": "key0", 00:16:18.013 "allow_unrecognized_csi": false, 00:16:18.013 "method": "bdev_nvme_attach_controller", 00:16:18.013 "req_id": 1 00:16:18.013 } 00:16:18.013 Got JSON-RPC error response 00:16:18.013 response: 00:16:18.013 { 00:16:18.013 "code": -126, 00:16:18.013 "message": "Required key not available" 00:16:18.013 } 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71861 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71861 ']' 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71861 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71861 00:16:18.272 killing process with pid 71861 00:16:18.272 Received shutdown signal, test time was about 10.000000 seconds 00:16:18.272 00:16:18.272 Latency(us) 00:16:18.272 [2024-12-05T19:36:11.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.272 [2024-12-05T19:36:11.709Z] =================================================================================================================== 00:16:18.272 [2024-12-05T19:36:11.709Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71861' 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71861 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71861 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71672 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71672 ']' 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71672 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.272 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71672 00:16:18.531 killing process with pid 71672 00:16:18.531 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:18.531 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:18.531 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71672' 00:16:18.531 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71672 00:16:18.531 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71672 00:16:18.531 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:16:18.531 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:18.531 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:18.531 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:16:18.531 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71891 00:16:18.532 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:18.532 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71891 00:16:18.532 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71891 ']' 00:16:18.532 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.532 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.532 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.532 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.532 19:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:18.791 [2024-12-05 19:36:11.993524] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:16:18.791 [2024-12-05 19:36:11.993648] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.791 [2024-12-05 19:36:12.135536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.791 [2024-12-05 19:36:12.191876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.791 [2024-12-05 19:36:12.191931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.791 [2024-12-05 19:36:12.191957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.791 [2024-12-05 19:36:12.191965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.791 [2024-12-05 19:36:12.191972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
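The keyring rejection above ("Invalid permissions for key file '/tmp/tmp.BrItUbweve': 0100666") follows directly from the chmod 0666 at tls.sh@171: keyring_file_add_key refuses key files whose mode grants access beyond the owner, and the nvmf_tgt started below hits the same error during setup_nvmf_tgt, which is why nvmf_subsystem_add_host then reports that key0 does not exist. The recovery the script performs afterwards amounts to:

# Restore owner-only access before registering the key again.
chmod 0600 /tmp/tmp.BrItUbweve
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BrItUbweve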
00:16:18.791 [2024-12-05 19:36:12.192401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.050 [2024-12-05 19:36:12.245132] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:19.617 19:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.618 19:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:19.618 19:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:19.618 19:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:19.618 19:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:19.618 19:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.618 19:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.BrItUbweve 00:16:19.618 19:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:19.618 19:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.BrItUbweve 00:16:19.618 19:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:16:19.618 19:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.618 19:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:16:19.618 19:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.618 19:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.BrItUbweve 00:16:19.618 19:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BrItUbweve 00:16:19.618 19:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:19.876 [2024-12-05 19:36:13.281148] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.876 19:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:20.136 19:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:20.395 [2024-12-05 19:36:13.821298] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:20.395 [2024-12-05 19:36:13.821592] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:20.655 19:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:20.914 malloc0 00:16:20.914 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:21.173 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BrItUbweve 00:16:21.173 
[2024-12-05 19:36:14.572497] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BrItUbweve': 0100666 00:16:21.173 [2024-12-05 19:36:14.572563] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:21.173 request: 00:16:21.173 { 00:16:21.173 "name": "key0", 00:16:21.173 "path": "/tmp/tmp.BrItUbweve", 00:16:21.173 "method": "keyring_file_add_key", 00:16:21.173 "req_id": 1 00:16:21.173 } 00:16:21.173 Got JSON-RPC error response 00:16:21.173 response: 00:16:21.173 { 00:16:21.173 "code": -1, 00:16:21.173 "message": "Operation not permitted" 00:16:21.173 } 00:16:21.173 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:21.433 [2024-12-05 19:36:14.808601] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:16:21.433 [2024-12-05 19:36:14.808698] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:21.433 request: 00:16:21.433 { 00:16:21.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.433 "host": "nqn.2016-06.io.spdk:host1", 00:16:21.433 "psk": "key0", 00:16:21.433 "method": "nvmf_subsystem_add_host", 00:16:21.433 "req_id": 1 00:16:21.433 } 00:16:21.433 Got JSON-RPC error response 00:16:21.433 response: 00:16:21.433 { 00:16:21.433 "code": -32603, 00:16:21.433 "message": "Internal error" 00:16:21.433 } 00:16:21.433 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:21.433 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:21.433 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:21.433 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:21.433 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71891 00:16:21.433 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71891 ']' 00:16:21.433 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71891 00:16:21.433 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:21.433 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.433 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71891 00:16:21.433 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:21.433 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:21.433 killing process with pid 71891 00:16:21.433 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71891' 00:16:21.433 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71891 00:16:21.433 19:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71891 00:16:21.692 19:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.BrItUbweve 00:16:21.692 19:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:16:21.692 19:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:21.692 19:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:21.692 19:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:21.692 19:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71960 00:16:21.692 19:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:21.692 19:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71960 00:16:21.692 19:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71960 ']' 00:16:21.692 19:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.692 19:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.692 19:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.692 19:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.692 19:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:21.950 [2024-12-05 19:36:15.137807] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:16:21.950 [2024-12-05 19:36:15.137923] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.950 [2024-12-05 19:36:15.284597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.950 [2024-12-05 19:36:15.334051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.950 [2024-12-05 19:36:15.334148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.950 [2024-12-05 19:36:15.334160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.950 [2024-12-05 19:36:15.334168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.950 [2024-12-05 19:36:15.334175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:21.950 [2024-12-05 19:36:15.334609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.207 [2024-12-05 19:36:15.391093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:22.773 19:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.774 19:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:22.774 19:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:22.774 19:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:22.774 19:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.774 19:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.774 19:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.BrItUbweve 00:16:22.774 19:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BrItUbweve 00:16:22.774 19:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:23.032 [2024-12-05 19:36:16.391926] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.032 19:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:23.291 19:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:23.549 [2024-12-05 19:36:16.868082] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:23.549 [2024-12-05 19:36:16.868365] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:23.549 19:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:23.808 malloc0 00:16:23.808 19:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:24.067 19:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BrItUbweve 00:16:24.326 19:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:24.585 19:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72016 00:16:24.585 19:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:24.585 19:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:24.585 19:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72016 /var/tmp/bdevperf.sock 00:16:24.585 19:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72016 ']' 
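Once the target side is in place, the initiator half of the test is symmetric: bdevperf gets its own copy of the key through its private RPC socket, attaches with the PSK, and bdevperf.py then drives the timed I/O. Condensed from the commands traced above and below:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BrItUbweve
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests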
00:16:24.585 19:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:24.585 19:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:24.585 19:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:24.585 19:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.585 19:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:24.585 [2024-12-05 19:36:17.906404] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:16:24.585 [2024-12-05 19:36:17.906516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72016 ] 00:16:24.845 [2024-12-05 19:36:18.055875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.845 [2024-12-05 19:36:18.126190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.845 [2024-12-05 19:36:18.184404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:25.521 19:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.521 19:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:25.521 19:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BrItUbweve 00:16:25.780 19:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:26.039 [2024-12-05 19:36:19.412817] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:26.298 TLSTESTn1 00:16:26.298 19:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:26.558 19:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:16:26.558 "subsystems": [ 00:16:26.558 { 00:16:26.558 "subsystem": "keyring", 00:16:26.558 "config": [ 00:16:26.558 { 00:16:26.558 "method": "keyring_file_add_key", 00:16:26.558 "params": { 00:16:26.558 "name": "key0", 00:16:26.558 "path": "/tmp/tmp.BrItUbweve" 00:16:26.558 } 00:16:26.558 } 00:16:26.558 ] 00:16:26.558 }, 00:16:26.558 { 00:16:26.558 "subsystem": "iobuf", 00:16:26.558 "config": [ 00:16:26.558 { 00:16:26.558 "method": "iobuf_set_options", 00:16:26.558 "params": { 00:16:26.558 "small_pool_count": 8192, 00:16:26.558 "large_pool_count": 1024, 00:16:26.558 "small_bufsize": 8192, 00:16:26.558 "large_bufsize": 135168, 00:16:26.558 "enable_numa": false 00:16:26.558 } 00:16:26.558 } 00:16:26.558 ] 00:16:26.558 }, 00:16:26.558 { 00:16:26.558 "subsystem": "sock", 00:16:26.558 "config": [ 00:16:26.558 { 00:16:26.558 "method": "sock_set_default_impl", 00:16:26.558 "params": { 
00:16:26.558 "impl_name": "uring" 00:16:26.558 } 00:16:26.558 }, 00:16:26.558 { 00:16:26.558 "method": "sock_impl_set_options", 00:16:26.558 "params": { 00:16:26.558 "impl_name": "ssl", 00:16:26.558 "recv_buf_size": 4096, 00:16:26.558 "send_buf_size": 4096, 00:16:26.558 "enable_recv_pipe": true, 00:16:26.558 "enable_quickack": false, 00:16:26.558 "enable_placement_id": 0, 00:16:26.558 "enable_zerocopy_send_server": true, 00:16:26.558 "enable_zerocopy_send_client": false, 00:16:26.558 "zerocopy_threshold": 0, 00:16:26.558 "tls_version": 0, 00:16:26.558 "enable_ktls": false 00:16:26.558 } 00:16:26.558 }, 00:16:26.558 { 00:16:26.558 "method": "sock_impl_set_options", 00:16:26.558 "params": { 00:16:26.558 "impl_name": "posix", 00:16:26.558 "recv_buf_size": 2097152, 00:16:26.558 "send_buf_size": 2097152, 00:16:26.558 "enable_recv_pipe": true, 00:16:26.558 "enable_quickack": false, 00:16:26.558 "enable_placement_id": 0, 00:16:26.558 "enable_zerocopy_send_server": true, 00:16:26.558 "enable_zerocopy_send_client": false, 00:16:26.558 "zerocopy_threshold": 0, 00:16:26.558 "tls_version": 0, 00:16:26.558 "enable_ktls": false 00:16:26.558 } 00:16:26.558 }, 00:16:26.558 { 00:16:26.558 "method": "sock_impl_set_options", 00:16:26.558 "params": { 00:16:26.558 "impl_name": "uring", 00:16:26.558 "recv_buf_size": 2097152, 00:16:26.558 "send_buf_size": 2097152, 00:16:26.558 "enable_recv_pipe": true, 00:16:26.558 "enable_quickack": false, 00:16:26.558 "enable_placement_id": 0, 00:16:26.558 "enable_zerocopy_send_server": false, 00:16:26.558 "enable_zerocopy_send_client": false, 00:16:26.558 "zerocopy_threshold": 0, 00:16:26.558 "tls_version": 0, 00:16:26.558 "enable_ktls": false 00:16:26.558 } 00:16:26.558 } 00:16:26.558 ] 00:16:26.558 }, 00:16:26.558 { 00:16:26.558 "subsystem": "vmd", 00:16:26.558 "config": [] 00:16:26.558 }, 00:16:26.558 { 00:16:26.558 "subsystem": "accel", 00:16:26.558 "config": [ 00:16:26.558 { 00:16:26.558 "method": "accel_set_options", 00:16:26.558 "params": { 00:16:26.558 "small_cache_size": 128, 00:16:26.558 "large_cache_size": 16, 00:16:26.558 "task_count": 2048, 00:16:26.558 "sequence_count": 2048, 00:16:26.558 "buf_count": 2048 00:16:26.558 } 00:16:26.558 } 00:16:26.558 ] 00:16:26.558 }, 00:16:26.558 { 00:16:26.558 "subsystem": "bdev", 00:16:26.558 "config": [ 00:16:26.558 { 00:16:26.558 "method": "bdev_set_options", 00:16:26.558 "params": { 00:16:26.558 "bdev_io_pool_size": 65535, 00:16:26.558 "bdev_io_cache_size": 256, 00:16:26.558 "bdev_auto_examine": true, 00:16:26.558 "iobuf_small_cache_size": 128, 00:16:26.558 "iobuf_large_cache_size": 16 00:16:26.558 } 00:16:26.558 }, 00:16:26.558 { 00:16:26.558 "method": "bdev_raid_set_options", 00:16:26.558 "params": { 00:16:26.558 "process_window_size_kb": 1024, 00:16:26.558 "process_max_bandwidth_mb_sec": 0 00:16:26.558 } 00:16:26.558 }, 00:16:26.558 { 00:16:26.558 "method": "bdev_iscsi_set_options", 00:16:26.558 "params": { 00:16:26.558 "timeout_sec": 30 00:16:26.558 } 00:16:26.558 }, 00:16:26.558 { 00:16:26.558 "method": "bdev_nvme_set_options", 00:16:26.558 "params": { 00:16:26.558 "action_on_timeout": "none", 00:16:26.558 "timeout_us": 0, 00:16:26.558 "timeout_admin_us": 0, 00:16:26.558 "keep_alive_timeout_ms": 10000, 00:16:26.558 "arbitration_burst": 0, 00:16:26.558 "low_priority_weight": 0, 00:16:26.558 "medium_priority_weight": 0, 00:16:26.558 "high_priority_weight": 0, 00:16:26.559 "nvme_adminq_poll_period_us": 10000, 00:16:26.559 "nvme_ioq_poll_period_us": 0, 00:16:26.559 "io_queue_requests": 0, 00:16:26.559 "delay_cmd_submit": 
true, 00:16:26.559 "transport_retry_count": 4, 00:16:26.559 "bdev_retry_count": 3, 00:16:26.559 "transport_ack_timeout": 0, 00:16:26.559 "ctrlr_loss_timeout_sec": 0, 00:16:26.559 "reconnect_delay_sec": 0, 00:16:26.559 "fast_io_fail_timeout_sec": 0, 00:16:26.559 "disable_auto_failback": false, 00:16:26.559 "generate_uuids": false, 00:16:26.559 "transport_tos": 0, 00:16:26.559 "nvme_error_stat": false, 00:16:26.559 "rdma_srq_size": 0, 00:16:26.559 "io_path_stat": false, 00:16:26.559 "allow_accel_sequence": false, 00:16:26.559 "rdma_max_cq_size": 0, 00:16:26.559 "rdma_cm_event_timeout_ms": 0, 00:16:26.559 "dhchap_digests": [ 00:16:26.559 "sha256", 00:16:26.559 "sha384", 00:16:26.559 "sha512" 00:16:26.559 ], 00:16:26.559 "dhchap_dhgroups": [ 00:16:26.559 "null", 00:16:26.559 "ffdhe2048", 00:16:26.559 "ffdhe3072", 00:16:26.559 "ffdhe4096", 00:16:26.559 "ffdhe6144", 00:16:26.559 "ffdhe8192" 00:16:26.559 ] 00:16:26.559 } 00:16:26.559 }, 00:16:26.559 { 00:16:26.559 "method": "bdev_nvme_set_hotplug", 00:16:26.559 "params": { 00:16:26.559 "period_us": 100000, 00:16:26.559 "enable": false 00:16:26.559 } 00:16:26.559 }, 00:16:26.559 { 00:16:26.559 "method": "bdev_malloc_create", 00:16:26.559 "params": { 00:16:26.559 "name": "malloc0", 00:16:26.559 "num_blocks": 8192, 00:16:26.559 "block_size": 4096, 00:16:26.559 "physical_block_size": 4096, 00:16:26.559 "uuid": "d38663ae-0e70-40fb-b76c-2991d5edd0fa", 00:16:26.559 "optimal_io_boundary": 0, 00:16:26.559 "md_size": 0, 00:16:26.559 "dif_type": 0, 00:16:26.559 "dif_is_head_of_md": false, 00:16:26.559 "dif_pi_format": 0 00:16:26.559 } 00:16:26.559 }, 00:16:26.559 { 00:16:26.559 "method": "bdev_wait_for_examine" 00:16:26.559 } 00:16:26.559 ] 00:16:26.559 }, 00:16:26.559 { 00:16:26.559 "subsystem": "nbd", 00:16:26.559 "config": [] 00:16:26.559 }, 00:16:26.559 { 00:16:26.559 "subsystem": "scheduler", 00:16:26.559 "config": [ 00:16:26.559 { 00:16:26.559 "method": "framework_set_scheduler", 00:16:26.559 "params": { 00:16:26.559 "name": "static" 00:16:26.559 } 00:16:26.559 } 00:16:26.559 ] 00:16:26.559 }, 00:16:26.559 { 00:16:26.559 "subsystem": "nvmf", 00:16:26.559 "config": [ 00:16:26.559 { 00:16:26.559 "method": "nvmf_set_config", 00:16:26.559 "params": { 00:16:26.559 "discovery_filter": "match_any", 00:16:26.559 "admin_cmd_passthru": { 00:16:26.559 "identify_ctrlr": false 00:16:26.559 }, 00:16:26.559 "dhchap_digests": [ 00:16:26.559 "sha256", 00:16:26.559 "sha384", 00:16:26.559 "sha512" 00:16:26.559 ], 00:16:26.559 "dhchap_dhgroups": [ 00:16:26.559 "null", 00:16:26.559 "ffdhe2048", 00:16:26.559 "ffdhe3072", 00:16:26.559 "ffdhe4096", 00:16:26.559 "ffdhe6144", 00:16:26.559 "ffdhe8192" 00:16:26.559 ] 00:16:26.559 } 00:16:26.559 }, 00:16:26.559 { 00:16:26.559 "method": "nvmf_set_max_subsystems", 00:16:26.559 "params": { 00:16:26.559 "max_subsystems": 1024 00:16:26.559 } 00:16:26.559 }, 00:16:26.559 { 00:16:26.559 "method": "nvmf_set_crdt", 00:16:26.559 "params": { 00:16:26.559 "crdt1": 0, 00:16:26.559 "crdt2": 0, 00:16:26.559 "crdt3": 0 00:16:26.559 } 00:16:26.559 }, 00:16:26.559 { 00:16:26.559 "method": "nvmf_create_transport", 00:16:26.559 "params": { 00:16:26.559 "trtype": "TCP", 00:16:26.559 "max_queue_depth": 128, 00:16:26.559 "max_io_qpairs_per_ctrlr": 127, 00:16:26.559 "in_capsule_data_size": 4096, 00:16:26.559 "max_io_size": 131072, 00:16:26.559 "io_unit_size": 131072, 00:16:26.559 "max_aq_depth": 128, 00:16:26.559 "num_shared_buffers": 511, 00:16:26.559 "buf_cache_size": 4294967295, 00:16:26.559 "dif_insert_or_strip": false, 00:16:26.559 "zcopy": false, 
00:16:26.559 "c2h_success": false, 00:16:26.559 "sock_priority": 0, 00:16:26.559 "abort_timeout_sec": 1, 00:16:26.559 "ack_timeout": 0, 00:16:26.559 "data_wr_pool_size": 0 00:16:26.559 } 00:16:26.559 }, 00:16:26.559 { 00:16:26.559 "method": "nvmf_create_subsystem", 00:16:26.559 "params": { 00:16:26.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.559 "allow_any_host": false, 00:16:26.559 "serial_number": "SPDK00000000000001", 00:16:26.559 "model_number": "SPDK bdev Controller", 00:16:26.559 "max_namespaces": 10, 00:16:26.559 "min_cntlid": 1, 00:16:26.559 "max_cntlid": 65519, 00:16:26.559 "ana_reporting": false 00:16:26.559 } 00:16:26.559 }, 00:16:26.559 { 00:16:26.559 "method": "nvmf_subsystem_add_host", 00:16:26.559 "params": { 00:16:26.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.559 "host": "nqn.2016-06.io.spdk:host1", 00:16:26.559 "psk": "key0" 00:16:26.559 } 00:16:26.559 }, 00:16:26.559 { 00:16:26.559 "method": "nvmf_subsystem_add_ns", 00:16:26.559 "params": { 00:16:26.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.559 "namespace": { 00:16:26.559 "nsid": 1, 00:16:26.559 "bdev_name": "malloc0", 00:16:26.559 "nguid": "D38663AE0E7040FBB76C2991D5EDD0FA", 00:16:26.559 "uuid": "d38663ae-0e70-40fb-b76c-2991d5edd0fa", 00:16:26.559 "no_auto_visible": false 00:16:26.559 } 00:16:26.559 } 00:16:26.559 }, 00:16:26.559 { 00:16:26.559 "method": "nvmf_subsystem_add_listener", 00:16:26.559 "params": { 00:16:26.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.559 "listen_address": { 00:16:26.559 "trtype": "TCP", 00:16:26.559 "adrfam": "IPv4", 00:16:26.559 "traddr": "10.0.0.3", 00:16:26.559 "trsvcid": "4420" 00:16:26.559 }, 00:16:26.559 "secure_channel": true 00:16:26.559 } 00:16:26.559 } 00:16:26.559 ] 00:16:26.559 } 00:16:26.559 ] 00:16:26.559 }' 00:16:26.559 19:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:26.818 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:16:26.818 "subsystems": [ 00:16:26.818 { 00:16:26.818 "subsystem": "keyring", 00:16:26.818 "config": [ 00:16:26.818 { 00:16:26.818 "method": "keyring_file_add_key", 00:16:26.818 "params": { 00:16:26.818 "name": "key0", 00:16:26.818 "path": "/tmp/tmp.BrItUbweve" 00:16:26.818 } 00:16:26.818 } 00:16:26.818 ] 00:16:26.818 }, 00:16:26.818 { 00:16:26.818 "subsystem": "iobuf", 00:16:26.818 "config": [ 00:16:26.818 { 00:16:26.818 "method": "iobuf_set_options", 00:16:26.818 "params": { 00:16:26.818 "small_pool_count": 8192, 00:16:26.818 "large_pool_count": 1024, 00:16:26.818 "small_bufsize": 8192, 00:16:26.818 "large_bufsize": 135168, 00:16:26.818 "enable_numa": false 00:16:26.818 } 00:16:26.818 } 00:16:26.818 ] 00:16:26.818 }, 00:16:26.818 { 00:16:26.818 "subsystem": "sock", 00:16:26.818 "config": [ 00:16:26.818 { 00:16:26.818 "method": "sock_set_default_impl", 00:16:26.818 "params": { 00:16:26.818 "impl_name": "uring" 00:16:26.818 } 00:16:26.818 }, 00:16:26.818 { 00:16:26.818 "method": "sock_impl_set_options", 00:16:26.818 "params": { 00:16:26.818 "impl_name": "ssl", 00:16:26.818 "recv_buf_size": 4096, 00:16:26.818 "send_buf_size": 4096, 00:16:26.818 "enable_recv_pipe": true, 00:16:26.818 "enable_quickack": false, 00:16:26.818 "enable_placement_id": 0, 00:16:26.818 "enable_zerocopy_send_server": true, 00:16:26.818 "enable_zerocopy_send_client": false, 00:16:26.818 "zerocopy_threshold": 0, 00:16:26.818 "tls_version": 0, 00:16:26.818 "enable_ktls": false 00:16:26.818 } 00:16:26.818 }, 
00:16:26.818 { 00:16:26.818 "method": "sock_impl_set_options", 00:16:26.818 "params": { 00:16:26.818 "impl_name": "posix", 00:16:26.818 "recv_buf_size": 2097152, 00:16:26.818 "send_buf_size": 2097152, 00:16:26.818 "enable_recv_pipe": true, 00:16:26.818 "enable_quickack": false, 00:16:26.818 "enable_placement_id": 0, 00:16:26.818 "enable_zerocopy_send_server": true, 00:16:26.818 "enable_zerocopy_send_client": false, 00:16:26.818 "zerocopy_threshold": 0, 00:16:26.818 "tls_version": 0, 00:16:26.818 "enable_ktls": false 00:16:26.818 } 00:16:26.818 }, 00:16:26.818 { 00:16:26.818 "method": "sock_impl_set_options", 00:16:26.818 "params": { 00:16:26.818 "impl_name": "uring", 00:16:26.818 "recv_buf_size": 2097152, 00:16:26.818 "send_buf_size": 2097152, 00:16:26.818 "enable_recv_pipe": true, 00:16:26.818 "enable_quickack": false, 00:16:26.818 "enable_placement_id": 0, 00:16:26.818 "enable_zerocopy_send_server": false, 00:16:26.818 "enable_zerocopy_send_client": false, 00:16:26.818 "zerocopy_threshold": 0, 00:16:26.818 "tls_version": 0, 00:16:26.818 "enable_ktls": false 00:16:26.818 } 00:16:26.818 } 00:16:26.818 ] 00:16:26.818 }, 00:16:26.818 { 00:16:26.818 "subsystem": "vmd", 00:16:26.818 "config": [] 00:16:26.818 }, 00:16:26.818 { 00:16:26.818 "subsystem": "accel", 00:16:26.818 "config": [ 00:16:26.818 { 00:16:26.818 "method": "accel_set_options", 00:16:26.818 "params": { 00:16:26.818 "small_cache_size": 128, 00:16:26.818 "large_cache_size": 16, 00:16:26.819 "task_count": 2048, 00:16:26.819 "sequence_count": 2048, 00:16:26.819 "buf_count": 2048 00:16:26.819 } 00:16:26.819 } 00:16:26.819 ] 00:16:26.819 }, 00:16:26.819 { 00:16:26.819 "subsystem": "bdev", 00:16:26.819 "config": [ 00:16:26.819 { 00:16:26.819 "method": "bdev_set_options", 00:16:26.819 "params": { 00:16:26.819 "bdev_io_pool_size": 65535, 00:16:26.819 "bdev_io_cache_size": 256, 00:16:26.819 "bdev_auto_examine": true, 00:16:26.819 "iobuf_small_cache_size": 128, 00:16:26.819 "iobuf_large_cache_size": 16 00:16:26.819 } 00:16:26.819 }, 00:16:26.819 { 00:16:26.819 "method": "bdev_raid_set_options", 00:16:26.819 "params": { 00:16:26.819 "process_window_size_kb": 1024, 00:16:26.819 "process_max_bandwidth_mb_sec": 0 00:16:26.819 } 00:16:26.819 }, 00:16:26.819 { 00:16:26.819 "method": "bdev_iscsi_set_options", 00:16:26.819 "params": { 00:16:26.819 "timeout_sec": 30 00:16:26.819 } 00:16:26.819 }, 00:16:26.819 { 00:16:26.819 "method": "bdev_nvme_set_options", 00:16:26.819 "params": { 00:16:26.819 "action_on_timeout": "none", 00:16:26.819 "timeout_us": 0, 00:16:26.819 "timeout_admin_us": 0, 00:16:26.819 "keep_alive_timeout_ms": 10000, 00:16:26.819 "arbitration_burst": 0, 00:16:26.819 "low_priority_weight": 0, 00:16:26.819 "medium_priority_weight": 0, 00:16:26.819 "high_priority_weight": 0, 00:16:26.819 "nvme_adminq_poll_period_us": 10000, 00:16:26.819 "nvme_ioq_poll_period_us": 0, 00:16:26.819 "io_queue_requests": 512, 00:16:26.819 "delay_cmd_submit": true, 00:16:26.819 "transport_retry_count": 4, 00:16:26.819 "bdev_retry_count": 3, 00:16:26.819 "transport_ack_timeout": 0, 00:16:26.819 "ctrlr_loss_timeout_sec": 0, 00:16:26.819 "reconnect_delay_sec": 0, 00:16:26.819 "fast_io_fail_timeout_sec": 0, 00:16:26.819 "disable_auto_failback": false, 00:16:26.819 "generate_uuids": false, 00:16:26.819 "transport_tos": 0, 00:16:26.819 "nvme_error_stat": false, 00:16:26.819 "rdma_srq_size": 0, 00:16:26.819 "io_path_stat": false, 00:16:26.819 "allow_accel_sequence": false, 00:16:26.819 "rdma_max_cq_size": 0, 00:16:26.819 "rdma_cm_event_timeout_ms": 0, 00:16:26.819 
"dhchap_digests": [ 00:16:26.819 "sha256", 00:16:26.819 "sha384", 00:16:26.819 "sha512" 00:16:26.819 ], 00:16:26.819 "dhchap_dhgroups": [ 00:16:26.819 "null", 00:16:26.819 "ffdhe2048", 00:16:26.819 "ffdhe3072", 00:16:26.819 "ffdhe4096", 00:16:26.819 "ffdhe6144", 00:16:26.819 "ffdhe8192" 00:16:26.819 ] 00:16:26.819 } 00:16:26.819 }, 00:16:26.819 { 00:16:26.819 "method": "bdev_nvme_attach_controller", 00:16:26.819 "params": { 00:16:26.819 "name": "TLSTEST", 00:16:26.819 "trtype": "TCP", 00:16:26.819 "adrfam": "IPv4", 00:16:26.819 "traddr": "10.0.0.3", 00:16:26.819 "trsvcid": "4420", 00:16:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.819 "prchk_reftag": false, 00:16:26.819 "prchk_guard": false, 00:16:26.819 "ctrlr_loss_timeout_sec": 0, 00:16:26.819 "reconnect_delay_sec": 0, 00:16:26.819 "fast_io_fail_timeout_sec": 0, 00:16:26.819 "psk": "key0", 00:16:26.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:26.819 "hdgst": false, 00:16:26.819 "ddgst": false, 00:16:26.819 "multipath": "multipath" 00:16:26.819 } 00:16:26.819 }, 00:16:26.819 { 00:16:26.819 "method": "bdev_nvme_set_hotplug", 00:16:26.819 "params": { 00:16:26.819 "period_us": 100000, 00:16:26.819 "enable": false 00:16:26.819 } 00:16:26.819 }, 00:16:26.819 { 00:16:26.819 "method": "bdev_wait_for_examine" 00:16:26.819 } 00:16:26.819 ] 00:16:26.819 }, 00:16:26.819 { 00:16:26.819 "subsystem": "nbd", 00:16:26.819 "config": [] 00:16:26.819 } 00:16:26.819 ] 00:16:26.819 }' 00:16:26.819 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72016 00:16:26.819 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72016 ']' 00:16:26.819 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72016 00:16:26.819 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:26.819 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.819 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72016 00:16:27.078 killing process with pid 72016 00:16:27.078 Received shutdown signal, test time was about 10.000000 seconds 00:16:27.078 00:16:27.078 Latency(us) 00:16:27.078 [2024-12-05T19:36:20.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.078 [2024-12-05T19:36:20.515Z] =================================================================================================================== 00:16:27.078 [2024-12-05T19:36:20.515Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:27.078 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:27.078 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:27.078 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72016' 00:16:27.078 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72016 00:16:27.078 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72016 00:16:27.078 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71960 00:16:27.078 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71960 ']' 00:16:27.078 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 71960 00:16:27.078 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:27.078 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.078 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71960 00:16:27.078 killing process with pid 71960 00:16:27.078 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:27.078 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:27.078 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71960' 00:16:27.078 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71960 00:16:27.078 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71960 00:16:27.338 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:27.338 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:27.338 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:27.338 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.338 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:16:27.338 "subsystems": [ 00:16:27.338 { 00:16:27.338 "subsystem": "keyring", 00:16:27.338 "config": [ 00:16:27.338 { 00:16:27.338 "method": "keyring_file_add_key", 00:16:27.338 "params": { 00:16:27.338 "name": "key0", 00:16:27.338 "path": "/tmp/tmp.BrItUbweve" 00:16:27.338 } 00:16:27.338 } 00:16:27.338 ] 00:16:27.338 }, 00:16:27.338 { 00:16:27.338 "subsystem": "iobuf", 00:16:27.338 "config": [ 00:16:27.338 { 00:16:27.338 "method": "iobuf_set_options", 00:16:27.338 "params": { 00:16:27.338 "small_pool_count": 8192, 00:16:27.338 "large_pool_count": 1024, 00:16:27.338 "small_bufsize": 8192, 00:16:27.338 "large_bufsize": 135168, 00:16:27.338 "enable_numa": false 00:16:27.338 } 00:16:27.338 } 00:16:27.338 ] 00:16:27.338 }, 00:16:27.338 { 00:16:27.338 "subsystem": "sock", 00:16:27.338 "config": [ 00:16:27.338 { 00:16:27.338 "method": "sock_set_default_impl", 00:16:27.338 "params": { 00:16:27.338 "impl_name": "uring" 00:16:27.338 } 00:16:27.338 }, 00:16:27.338 { 00:16:27.338 "method": "sock_impl_set_options", 00:16:27.338 "params": { 00:16:27.338 "impl_name": "ssl", 00:16:27.338 "recv_buf_size": 4096, 00:16:27.338 "send_buf_size": 4096, 00:16:27.338 "enable_recv_pipe": true, 00:16:27.338 "enable_quickack": false, 00:16:27.338 "enable_placement_id": 0, 00:16:27.338 "enable_zerocopy_send_server": true, 00:16:27.338 "enable_zerocopy_send_client": false, 00:16:27.338 "zerocopy_threshold": 0, 00:16:27.338 "tls_version": 0, 00:16:27.338 "enable_ktls": false 00:16:27.338 } 00:16:27.338 }, 00:16:27.338 { 00:16:27.338 "method": "sock_impl_set_options", 00:16:27.338 "params": { 00:16:27.338 "impl_name": "posix", 00:16:27.338 "recv_buf_size": 2097152, 00:16:27.338 "send_buf_size": 2097152, 00:16:27.338 "enable_recv_pipe": true, 00:16:27.338 "enable_quickack": false, 00:16:27.338 "enable_placement_id": 0, 00:16:27.338 "enable_zerocopy_send_server": true, 00:16:27.338 "enable_zerocopy_send_client": false, 00:16:27.338 "zerocopy_threshold": 0, 00:16:27.338 "tls_version": 0, 00:16:27.338 "enable_ktls": false 
00:16:27.338 } 00:16:27.338 }, 00:16:27.338 { 00:16:27.338 "method": "sock_impl_set_options", 00:16:27.338 "params": { 00:16:27.338 "impl_name": "uring", 00:16:27.338 "recv_buf_size": 2097152, 00:16:27.338 "send_buf_size": 2097152, 00:16:27.338 "enable_recv_pipe": true, 00:16:27.338 "enable_quickack": false, 00:16:27.338 "enable_placement_id": 0, 00:16:27.338 "enable_zerocopy_send_server": false, 00:16:27.338 "enable_zerocopy_send_client": false, 00:16:27.338 "zerocopy_threshold": 0, 00:16:27.338 "tls_version": 0, 00:16:27.338 "enable_ktls": false 00:16:27.338 } 00:16:27.338 } 00:16:27.338 ] 00:16:27.338 }, 00:16:27.338 { 00:16:27.338 "subsystem": "vmd", 00:16:27.338 "config": [] 00:16:27.338 }, 00:16:27.338 { 00:16:27.338 "subsystem": "accel", 00:16:27.338 "config": [ 00:16:27.338 { 00:16:27.338 "method": "accel_set_options", 00:16:27.338 "params": { 00:16:27.338 "small_cache_size": 128, 00:16:27.338 "large_cache_size": 16, 00:16:27.338 "task_count": 2048, 00:16:27.338 "sequence_count": 2048, 00:16:27.338 "buf_count": 2048 00:16:27.338 } 00:16:27.338 } 00:16:27.338 ] 00:16:27.338 }, 00:16:27.338 { 00:16:27.338 "subsystem": "bdev", 00:16:27.338 "config": [ 00:16:27.338 { 00:16:27.338 "method": "bdev_set_options", 00:16:27.338 "params": { 00:16:27.338 "bdev_io_pool_size": 65535, 00:16:27.338 "bdev_io_cache_size": 256, 00:16:27.338 "bdev_auto_examine": true, 00:16:27.338 "iobuf_small_cache_size": 128, 00:16:27.338 "iobuf_large_cache_size": 16 00:16:27.338 } 00:16:27.338 }, 00:16:27.338 { 00:16:27.338 "method": "bdev_raid_set_options", 00:16:27.338 "params": { 00:16:27.338 "process_window_size_kb": 1024, 00:16:27.338 "process_max_bandwidth_mb_sec": 0 00:16:27.338 } 00:16:27.338 }, 00:16:27.338 { 00:16:27.338 "method": "bdev_iscsi_set_options", 00:16:27.338 "params": { 00:16:27.338 "timeout_sec": 30 00:16:27.338 } 00:16:27.338 }, 00:16:27.338 { 00:16:27.338 "method": "bdev_nvme_set_options", 00:16:27.338 "params": { 00:16:27.338 "action_on_timeout": "none", 00:16:27.338 "timeout_us": 0, 00:16:27.338 "timeout_admin_us": 0, 00:16:27.338 "keep_alive_timeout_ms": 10000, 00:16:27.338 "arbitration_burst": 0, 00:16:27.338 "low_priority_weight": 0, 00:16:27.338 "medium_priority_weight": 0, 00:16:27.338 "high_priority_weight": 0, 00:16:27.338 "nvme_adminq_poll_period_us": 10000, 00:16:27.338 "nvme_ioq_poll_period_us": 0, 00:16:27.338 "io_queue_requests": 0, 00:16:27.338 "delay_cmd_submit": true, 00:16:27.338 "transport_retry_count": 4, 00:16:27.338 "bdev_retry_count": 3, 00:16:27.338 "transport_ack_timeout": 0, 00:16:27.338 "ctrlr_loss_timeout_sec": 0, 00:16:27.338 "reconnect_delay_sec": 0, 00:16:27.338 "fast_io_fail_timeout_sec": 0, 00:16:27.338 "disable_auto_failback": false, 00:16:27.338 "generate_uuids": false, 00:16:27.338 "transport_tos": 0, 00:16:27.338 "nvme_error_stat": false, 00:16:27.338 "rdma_srq_size": 0, 00:16:27.338 "io_path_stat": false, 00:16:27.338 "allow_accel_sequence": false, 00:16:27.338 "rdma_max_cq_size": 0, 00:16:27.338 "rdma_cm_event_timeout_ms": 0, 00:16:27.338 "dhchap_digests": [ 00:16:27.338 "sha256", 00:16:27.338 "sha384", 00:16:27.338 "sha512" 00:16:27.338 ], 00:16:27.338 "dhchap_dhgroups": [ 00:16:27.338 "null", 00:16:27.338 "ffdhe2048", 00:16:27.338 "ffdhe3072", 00:16:27.338 "ffdhe4096", 00:16:27.338 "ffdhe6144", 00:16:27.338 "ffdhe8192" 00:16:27.338 ] 00:16:27.338 } 00:16:27.338 }, 00:16:27.338 { 00:16:27.338 "method": "bdev_nvme_set_hotplug", 00:16:27.338 "params": { 00:16:27.338 "period_us": 100000, 00:16:27.338 "enable": false 00:16:27.338 } 00:16:27.338 }, 
00:16:27.338 { 00:16:27.338 "method": "bdev_malloc_create", 00:16:27.338 "params": { 00:16:27.338 "name": "malloc0", 00:16:27.338 "num_blocks": 8192, 00:16:27.338 "block_size": 4096, 00:16:27.338 "physical_block_size": 4096, 00:16:27.338 "uuid": "d38663ae-0e70-40fb-b76c-2991d5edd0fa", 00:16:27.338 "optimal_io_boundary": 0, 00:16:27.338 "md_size": 0, 00:16:27.338 "dif_type": 0, 00:16:27.338 "dif_is_head_of_md": false, 00:16:27.338 "dif_pi_format": 0 00:16:27.338 } 00:16:27.338 }, 00:16:27.338 { 00:16:27.338 "method": "bdev_wait_for_examine" 00:16:27.338 } 00:16:27.338 ] 00:16:27.338 }, 00:16:27.338 { 00:16:27.338 "subsystem": "nbd", 00:16:27.338 "config": [] 00:16:27.338 }, 00:16:27.338 { 00:16:27.338 "subsystem": "scheduler", 00:16:27.338 "config": [ 00:16:27.338 { 00:16:27.338 "method": "framework_set_scheduler", 00:16:27.338 "params": { 00:16:27.338 "name": "static" 00:16:27.338 } 00:16:27.339 } 00:16:27.339 ] 00:16:27.339 }, 00:16:27.339 { 00:16:27.339 "subsystem": "nvmf", 00:16:27.339 "config": [ 00:16:27.339 { 00:16:27.339 "method": "nvmf_set_config", 00:16:27.339 "params": { 00:16:27.339 "discovery_filter": "match_any", 00:16:27.339 "admin_cmd_passthru": { 00:16:27.339 "identify_ctrlr": false 00:16:27.339 }, 00:16:27.339 "dhchap_digests": [ 00:16:27.339 "sha256", 00:16:27.339 "sha384", 00:16:27.339 "sha512" 00:16:27.339 ], 00:16:27.339 "dhchap_dhgroups": [ 00:16:27.339 "null", 00:16:27.339 "ffdhe2048", 00:16:27.339 "ffdhe3072", 00:16:27.339 "ffdhe4096", 00:16:27.339 "ffdhe6144", 00:16:27.339 "ffdhe8192" 00:16:27.339 ] 00:16:27.339 } 00:16:27.339 }, 00:16:27.339 { 00:16:27.339 "method": "nvmf_set_max_subsystems", 00:16:27.339 "params": { 00:16:27.339 "max_subsystems": 1024 00:16:27.339 } 00:16:27.339 }, 00:16:27.339 { 00:16:27.339 "method": "nvmf_set_crdt", 00:16:27.339 "params": { 00:16:27.339 "crdt1": 0, 00:16:27.339 "crdt2": 0, 00:16:27.339 "crdt3": 0 00:16:27.339 } 00:16:27.339 }, 00:16:27.339 { 00:16:27.339 "method": "nvmf_create_transport", 00:16:27.339 "params": { 00:16:27.339 "trtype": "TCP", 00:16:27.339 "max_queue_depth": 128, 00:16:27.339 "max_io_qpairs_per_ctrlr": 127, 00:16:27.339 "in_capsule_data_size": 4096, 00:16:27.339 "max_io_size": 131072, 00:16:27.339 "io_unit_size": 131072, 00:16:27.339 "max_aq_depth": 128, 00:16:27.339 "num_shared_buffers": 511, 00:16:27.339 "buf_cache_size": 4294967295, 00:16:27.339 "dif_insert_or_strip": false, 00:16:27.339 "zcopy": false, 00:16:27.339 "c2h_success": false, 00:16:27.339 "sock_priority": 0, 00:16:27.339 "abort_timeout_sec": 1, 00:16:27.339 "ack_timeout": 0, 00:16:27.339 "data_wr_pool_size": 0 00:16:27.339 } 00:16:27.339 }, 00:16:27.339 { 00:16:27.339 "method": "nvmf_create_subsystem", 00:16:27.339 "params": { 00:16:27.339 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.339 "allow_any_host": false, 00:16:27.339 "serial_number": "SPDK00000000000001", 00:16:27.339 "model_number": "SPDK bdev Controller", 00:16:27.339 "max_namespaces": 10, 00:16:27.339 "min_cntlid": 1, 00:16:27.339 "max_cntlid": 65519, 00:16:27.339 "ana_reporting": false 00:16:27.339 } 00:16:27.339 }, 00:16:27.339 { 00:16:27.339 "method": "nvmf_subsystem_add_host", 00:16:27.339 "params": { 00:16:27.339 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.339 "host": "nqn.2016-06.io.spdk:host1", 00:16:27.339 "psk": "key0" 00:16:27.339 } 00:16:27.339 }, 00:16:27.339 { 00:16:27.339 "method": "nvmf_subsystem_add_ns", 00:16:27.339 "params": { 00:16:27.339 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.339 "namespace": { 00:16:27.339 "nsid": 1, 00:16:27.339 "bdev_name": "malloc0", 
00:16:27.339 "nguid": "D38663AE0E7040FBB76C2991D5EDD0FA", 00:16:27.339 "uuid": "d38663ae-0e70-40fb-b76c-2991d5edd0fa", 00:16:27.339 "no_auto_visible": false 00:16:27.339 } 00:16:27.339 } 00:16:27.339 }, 00:16:27.339 { 00:16:27.339 "method": "nvmf_subsystem_add_listener", 00:16:27.339 "params": { 00:16:27.339 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.339 "listen_address": { 00:16:27.339 "trtype": "TCP", 00:16:27.339 "adrfam": "IPv4", 00:16:27.339 "traddr": "10.0.0.3", 00:16:27.339 "trsvcid": "4420" 00:16:27.339 }, 00:16:27.339 "secure_channel": true 00:16:27.339 } 00:16:27.339 } 00:16:27.339 ] 00:16:27.339 } 00:16:27.339 ] 00:16:27.339 }' 00:16:27.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.339 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72071 00:16:27.339 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:27.339 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72071 00:16:27.339 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72071 ']' 00:16:27.339 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.339 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.339 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.339 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.339 19:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.339 [2024-12-05 19:36:20.748264] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:16:27.339 [2024-12-05 19:36:20.748676] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.598 [2024-12-05 19:36:20.891633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.598 [2024-12-05 19:36:20.942030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.598 [2024-12-05 19:36:20.942371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.598 [2024-12-05 19:36:20.942603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.598 [2024-12-05 19:36:20.942742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.598 [2024-12-05 19:36:20.942776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:27.598 [2024-12-05 19:36:20.943358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.857 [2024-12-05 19:36:21.111163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:27.857 [2024-12-05 19:36:21.189100] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.857 [2024-12-05 19:36:21.221056] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:27.857 [2024-12-05 19:36:21.221570] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:28.426 19:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.426 19:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:28.426 19:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:28.426 19:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:28.426 19:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:28.426 19:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.426 19:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72103 00:16:28.426 19:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72103 /var/tmp/bdevperf.sock 00:16:28.426 19:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72103 ']' 00:16:28.426 19:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:28.426 19:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.426 19:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:28.426 19:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.426 19:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.426 19:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:28.426 19:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:16:28.426 "subsystems": [ 00:16:28.426 { 00:16:28.426 "subsystem": "keyring", 00:16:28.426 "config": [ 00:16:28.426 { 00:16:28.426 "method": "keyring_file_add_key", 00:16:28.427 "params": { 00:16:28.427 "name": "key0", 00:16:28.427 "path": "/tmp/tmp.BrItUbweve" 00:16:28.427 } 00:16:28.427 } 00:16:28.427 ] 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "subsystem": "iobuf", 00:16:28.427 "config": [ 00:16:28.427 { 00:16:28.427 "method": "iobuf_set_options", 00:16:28.427 "params": { 00:16:28.427 "small_pool_count": 8192, 00:16:28.427 "large_pool_count": 1024, 00:16:28.427 "small_bufsize": 8192, 00:16:28.427 "large_bufsize": 135168, 00:16:28.427 "enable_numa": false 00:16:28.427 } 00:16:28.427 } 00:16:28.427 ] 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "subsystem": "sock", 00:16:28.427 "config": [ 00:16:28.427 { 00:16:28.427 "method": "sock_set_default_impl", 00:16:28.427 "params": { 00:16:28.427 "impl_name": "uring" 00:16:28.427 } 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "method": "sock_impl_set_options", 00:16:28.427 "params": { 00:16:28.427 "impl_name": "ssl", 00:16:28.427 "recv_buf_size": 4096, 00:16:28.427 "send_buf_size": 4096, 00:16:28.427 "enable_recv_pipe": true, 00:16:28.427 "enable_quickack": false, 00:16:28.427 "enable_placement_id": 0, 00:16:28.427 "enable_zerocopy_send_server": true, 00:16:28.427 "enable_zerocopy_send_client": false, 00:16:28.427 "zerocopy_threshold": 0, 00:16:28.427 "tls_version": 0, 00:16:28.427 "enable_ktls": false 00:16:28.427 } 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "method": "sock_impl_set_options", 00:16:28.427 "params": { 00:16:28.427 "impl_name": "posix", 00:16:28.427 "recv_buf_size": 2097152, 00:16:28.427 "send_buf_size": 2097152, 00:16:28.427 "enable_recv_pipe": true, 00:16:28.427 "enable_quickack": false, 00:16:28.427 "enable_placement_id": 0, 00:16:28.427 "enable_zerocopy_send_server": true, 00:16:28.427 "enable_zerocopy_send_client": false, 00:16:28.427 "zerocopy_threshold": 0, 00:16:28.427 "tls_version": 0, 00:16:28.427 "enable_ktls": false 00:16:28.427 } 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "method": "sock_impl_set_options", 00:16:28.427 "params": { 00:16:28.427 "impl_name": "uring", 00:16:28.427 "recv_buf_size": 2097152, 00:16:28.427 "send_buf_size": 2097152, 00:16:28.427 "enable_recv_pipe": true, 00:16:28.427 "enable_quickack": false, 00:16:28.427 "enable_placement_id": 0, 00:16:28.427 "enable_zerocopy_send_server": false, 00:16:28.427 "enable_zerocopy_send_client": false, 00:16:28.427 "zerocopy_threshold": 0, 00:16:28.427 "tls_version": 0, 00:16:28.427 "enable_ktls": false 00:16:28.427 } 00:16:28.427 } 00:16:28.427 ] 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "subsystem": "vmd", 00:16:28.427 "config": [] 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "subsystem": "accel", 00:16:28.427 "config": [ 00:16:28.427 { 00:16:28.427 "method": "accel_set_options", 00:16:28.427 "params": { 00:16:28.427 "small_cache_size": 128, 00:16:28.427 "large_cache_size": 16, 00:16:28.427 "task_count": 2048, 00:16:28.427 "sequence_count": 
2048, 00:16:28.427 "buf_count": 2048 00:16:28.427 } 00:16:28.427 } 00:16:28.427 ] 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "subsystem": "bdev", 00:16:28.427 "config": [ 00:16:28.427 { 00:16:28.427 "method": "bdev_set_options", 00:16:28.427 "params": { 00:16:28.427 "bdev_io_pool_size": 65535, 00:16:28.427 "bdev_io_cache_size": 256, 00:16:28.427 "bdev_auto_examine": true, 00:16:28.427 "iobuf_small_cache_size": 128, 00:16:28.427 "iobuf_large_cache_size": 16 00:16:28.427 } 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "method": "bdev_raid_set_options", 00:16:28.427 "params": { 00:16:28.427 "process_window_size_kb": 1024, 00:16:28.427 "process_max_bandwidth_mb_sec": 0 00:16:28.427 } 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "method": "bdev_iscsi_set_options", 00:16:28.427 "params": { 00:16:28.427 "timeout_sec": 30 00:16:28.427 } 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "method": "bdev_nvme_set_options", 00:16:28.427 "params": { 00:16:28.427 "action_on_timeout": "none", 00:16:28.427 "timeout_us": 0, 00:16:28.427 "timeout_admin_us": 0, 00:16:28.427 "keep_alive_timeout_ms": 10000, 00:16:28.427 "arbitration_burst": 0, 00:16:28.427 "low_priority_weight": 0, 00:16:28.427 "medium_priority_weight": 0, 00:16:28.427 "high_priority_weight": 0, 00:16:28.427 "nvme_adminq_poll_period_us": 10000, 00:16:28.427 "nvme_ioq_poll_period_us": 0, 00:16:28.427 "io_queue_requests": 512, 00:16:28.427 "delay_cmd_submit": true, 00:16:28.427 "transport_retry_count": 4, 00:16:28.427 "bdev_retry_count": 3, 00:16:28.427 "transport_ack_timeout": 0, 00:16:28.427 "ctrlr_loss_timeout_sec": 0, 00:16:28.427 "reconnect_delay_sec": 0, 00:16:28.427 "fast_io_fail_timeout_sec": 0, 00:16:28.427 "disable_auto_failback": false, 00:16:28.427 "generate_uuids": false, 00:16:28.427 "transport_tos": 0, 00:16:28.427 "nvme_error_stat": false, 00:16:28.427 "rdma_srq_size": 0, 00:16:28.427 "io_path_stat": false, 00:16:28.427 "allow_accel_sequence": false, 00:16:28.427 "rdma_max_cq_size": 0, 00:16:28.427 "rdma_cm_event_timeout_ms": 0, 00:16:28.427 "dhchap_digests": [ 00:16:28.427 "sha256", 00:16:28.427 "sha384", 00:16:28.427 "sha512" 00:16:28.427 ], 00:16:28.427 "dhchap_dhgroups": [ 00:16:28.427 "null", 00:16:28.427 "ffdhe2048", 00:16:28.427 "ffdhe3072", 00:16:28.427 "ffdhe4096", 00:16:28.427 "ffdhe6144", 00:16:28.427 "ffdhe8192" 00:16:28.427 ] 00:16:28.427 } 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "method": "bdev_nvme_attach_controller", 00:16:28.427 "params": { 00:16:28.427 "name": "TLSTEST", 00:16:28.427 "trtype": "TCP", 00:16:28.427 "adrfam": "IPv4", 00:16:28.427 "traddr": "10.0.0.3", 00:16:28.427 "trsvcid": "4420", 00:16:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.427 "prchk_reftag": false, 00:16:28.427 "prchk_guard": false, 00:16:28.427 "ctrlr_loss_timeout_sec": 0, 00:16:28.427 "reconnect_delay_sec": 0, 00:16:28.427 "fast_io_fail_timeout_sec": 0, 00:16:28.427 "psk": "key0", 00:16:28.427 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.427 "hdgst": false, 00:16:28.427 "ddgst": false, 00:16:28.427 "multipath": "multipath" 00:16:28.427 } 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "method": "bdev_nvme_set_hotplug", 00:16:28.427 "params": { 00:16:28.427 "period_us": 100000, 00:16:28.427 "enable": false 00:16:28.427 } 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "method": "bdev_wait_for_examine" 00:16:28.427 } 00:16:28.427 ] 00:16:28.427 }, 00:16:28.427 { 00:16:28.427 "subsystem": "nbd", 00:16:28.427 "config": [] 00:16:28.427 } 00:16:28.427 ] 00:16:28.427 }' 00:16:28.427 [2024-12-05 19:36:21.824277] Starting SPDK v25.01-pre git 
sha1 2c140f58f / DPDK 24.03.0 initialization... 00:16:28.427 [2024-12-05 19:36:21.824373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72103 ] 00:16:28.687 [2024-12-05 19:36:21.974922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.687 [2024-12-05 19:36:22.036419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.946 [2024-12-05 19:36:22.175559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:28.946 [2024-12-05 19:36:22.225420] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:29.514 19:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.514 19:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:29.514 19:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:29.773 Running I/O for 10 seconds... 00:16:31.677 4224.00 IOPS, 16.50 MiB/s [2024-12-05T19:36:26.047Z] 4288.00 IOPS, 16.75 MiB/s [2024-12-05T19:36:27.425Z] 4284.33 IOPS, 16.74 MiB/s [2024-12-05T19:36:28.357Z] 4331.75 IOPS, 16.92 MiB/s [2024-12-05T19:36:29.292Z] 4363.80 IOPS, 17.05 MiB/s [2024-12-05T19:36:30.229Z] 4376.33 IOPS, 17.10 MiB/s [2024-12-05T19:36:31.165Z] 4391.71 IOPS, 17.16 MiB/s [2024-12-05T19:36:32.101Z] 4414.38 IOPS, 17.24 MiB/s [2024-12-05T19:36:33.038Z] 4436.22 IOPS, 17.33 MiB/s [2024-12-05T19:36:33.038Z] 4448.90 IOPS, 17.38 MiB/s 00:16:39.601 Latency(us) 00:16:39.601 [2024-12-05T19:36:33.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.601 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:39.601 Verification LBA range: start 0x0 length 0x2000 00:16:39.601 TLSTESTn1 : 10.01 4455.16 17.40 0.00 0.00 28680.53 5034.36 21924.77 00:16:39.601 [2024-12-05T19:36:33.038Z] =================================================================================================================== 00:16:39.601 [2024-12-05T19:36:33.038Z] Total : 4455.16 17.40 0.00 0.00 28680.53 5034.36 21924.77 00:16:39.601 { 00:16:39.601 "results": [ 00:16:39.601 { 00:16:39.601 "job": "TLSTESTn1", 00:16:39.601 "core_mask": "0x4", 00:16:39.601 "workload": "verify", 00:16:39.601 "status": "finished", 00:16:39.601 "verify_range": { 00:16:39.601 "start": 0, 00:16:39.601 "length": 8192 00:16:39.601 }, 00:16:39.601 "queue_depth": 128, 00:16:39.601 "io_size": 4096, 00:16:39.601 "runtime": 10.014448, 00:16:39.601 "iops": 4455.163180237193, 00:16:39.601 "mibps": 17.402981172801535, 00:16:39.601 "io_failed": 0, 00:16:39.601 "io_timeout": 0, 00:16:39.601 "avg_latency_us": 28680.525571584592, 00:16:39.601 "min_latency_us": 5034.356363636363, 00:16:39.601 "max_latency_us": 21924.77090909091 00:16:39.601 } 00:16:39.601 ], 00:16:39.601 "core_count": 1 00:16:39.601 } 00:16:39.601 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:39.601 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72103 00:16:39.601 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72103 ']' 00:16:39.601 
19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72103 00:16:39.601 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72103 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:39.860 killing process with pid 72103 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72103' 00:16:39.860 Received shutdown signal, test time was about 10.000000 seconds 00:16:39.860 00:16:39.860 Latency(us) 00:16:39.860 [2024-12-05T19:36:33.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.860 [2024-12-05T19:36:33.297Z] =================================================================================================================== 00:16:39.860 [2024-12-05T19:36:33.297Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72103 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72103 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72071 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72071 ']' 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72071 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72071 00:16:39.860 killing process with pid 72071 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72071' 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72071 00:16:39.860 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72071 00:16:40.119 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:16:40.119 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:40.119 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:40.119 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.119 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72236 00:16:40.119 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:16:40.119 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72236 00:16:40.119 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72236 ']' 00:16:40.119 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.119 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.119 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.119 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.119 19:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.378 [2024-12-05 19:36:33.558586] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:16:40.378 [2024-12-05 19:36:33.558710] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.378 [2024-12-05 19:36:33.715433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.378 [2024-12-05 19:36:33.773519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.378 [2024-12-05 19:36:33.773584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.378 [2024-12-05 19:36:33.773598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.378 [2024-12-05 19:36:33.773608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.378 [2024-12-05 19:36:33.773617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:40.378 [2024-12-05 19:36:33.774040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.637 [2024-12-05 19:36:33.833372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:41.206 19:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.206 19:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:41.206 19:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:41.206 19:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:41.206 19:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:41.206 19:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.206 19:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.BrItUbweve 00:16:41.206 19:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BrItUbweve 00:16:41.206 19:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:41.466 [2024-12-05 19:36:34.862035] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.466 19:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:41.725 19:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:41.984 [2024-12-05 19:36:35.382166] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:41.984 [2024-12-05 19:36:35.382448] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:41.984 19:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:42.551 malloc0 00:16:42.552 19:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:42.552 19:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BrItUbweve 00:16:42.811 19:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:43.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
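For reference, the target-side TLS setup that setup_nvmf_tgt performs above reduces to the following RPC sequence (a minimal sketch assembled from the commands captured in this log; the PSK file /tmp/tmp.BrItUbweve and the 10.0.0.3:4420 listener are simply the values this particular run uses):

    # create the TCP transport and a subsystem, then add a listener flagged for a secure channel (-k),
    # which shows up as "secure_channel": true in the saved config
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    # back the namespace with a malloc bdev and expose it on the subsystem
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # register the PSK with the keyring and bind it to the allowed host
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BrItUbweve
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0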
00:16:43.388 19:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72297 00:16:43.388 19:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:43.388 19:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:43.388 19:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72297 /var/tmp/bdevperf.sock 00:16:43.388 19:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72297 ']' 00:16:43.388 19:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:43.388 19:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.388 19:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:43.388 19:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.388 19:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:43.388 [2024-12-05 19:36:36.563564] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:16:43.388 [2024-12-05 19:36:36.563858] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72297 ] 00:16:43.388 [2024-12-05 19:36:36.710293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.388 [2024-12-05 19:36:36.767782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.659 [2024-12-05 19:36:36.824136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:43.659 19:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.659 19:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:43.660 19:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BrItUbweve 00:16:43.919 19:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:44.178 [2024-12-05 19:36:37.409635] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:44.178 nvme0n1 00:16:44.178 19:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:44.437 Running I/O for 1 seconds... 
00:16:45.374 4352.00 IOPS, 17.00 MiB/s 00:16:45.374 Latency(us) 00:16:45.374 [2024-12-05T19:36:38.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.374 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:45.374 Verification LBA range: start 0x0 length 0x2000 00:16:45.374 nvme0n1 : 1.03 4354.77 17.01 0.00 0.00 29080.59 7238.75 18826.71 00:16:45.374 [2024-12-05T19:36:38.811Z] =================================================================================================================== 00:16:45.374 [2024-12-05T19:36:38.811Z] Total : 4354.77 17.01 0.00 0.00 29080.59 7238.75 18826.71 00:16:45.374 { 00:16:45.374 "results": [ 00:16:45.374 { 00:16:45.374 "job": "nvme0n1", 00:16:45.374 "core_mask": "0x2", 00:16:45.374 "workload": "verify", 00:16:45.374 "status": "finished", 00:16:45.374 "verify_range": { 00:16:45.374 "start": 0, 00:16:45.374 "length": 8192 00:16:45.374 }, 00:16:45.374 "queue_depth": 128, 00:16:45.374 "io_size": 4096, 00:16:45.374 "runtime": 1.028758, 00:16:45.374 "iops": 4354.76564945303, 00:16:45.374 "mibps": 17.0108033181759, 00:16:45.374 "io_failed": 0, 00:16:45.374 "io_timeout": 0, 00:16:45.374 "avg_latency_us": 29080.585974025973, 00:16:45.374 "min_latency_us": 7238.749090909091, 00:16:45.374 "max_latency_us": 18826.705454545456 00:16:45.374 } 00:16:45.374 ], 00:16:45.374 "core_count": 1 00:16:45.374 } 00:16:45.374 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72297 00:16:45.374 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72297 ']' 00:16:45.374 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72297 00:16:45.374 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:45.374 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.374 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72297 00:16:45.374 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:45.374 killing process with pid 72297 00:16:45.374 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:45.374 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72297' 00:16:45.374 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72297 00:16:45.374 Received shutdown signal, test time was about 1.000000 seconds 00:16:45.374 00:16:45.374 Latency(us) 00:16:45.374 [2024-12-05T19:36:38.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.374 [2024-12-05T19:36:38.811Z] =================================================================================================================== 00:16:45.374 [2024-12-05T19:36:38.811Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:45.375 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72297 00:16:45.634 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72236 00:16:45.634 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72236 ']' 00:16:45.634 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72236 00:16:45.634 19:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:45.634 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.634 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72236 00:16:45.634 killing process with pid 72236 00:16:45.634 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:45.634 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:45.634 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72236' 00:16:45.634 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72236 00:16:45.634 19:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72236 00:16:45.892 19:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:16:45.893 19:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:45.893 19:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:45.893 19:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.893 19:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72341 00:16:45.893 19:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:45.893 19:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72341 00:16:45.893 19:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72341 ']' 00:16:45.893 19:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.893 19:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.893 19:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.893 19:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.893 19:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.893 [2024-12-05 19:36:39.228960] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:16:45.893 [2024-12-05 19:36:39.229409] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.152 [2024-12-05 19:36:39.378381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.152 [2024-12-05 19:36:39.426928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.152 [2024-12-05 19:36:39.426985] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:46.152 [2024-12-05 19:36:39.427011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.152 [2024-12-05 19:36:39.427019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.152 [2024-12-05 19:36:39.427025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.152 [2024-12-05 19:36:39.427496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.152 [2024-12-05 19:36:39.484304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:47.088 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.088 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:47.088 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:47.088 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:47.088 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.088 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.088 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:16:47.088 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.088 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.088 [2024-12-05 19:36:40.275432] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.088 malloc0 00:16:47.088 [2024-12-05 19:36:40.306752] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:47.089 [2024-12-05 19:36:40.306959] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:47.089 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.089 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72373 00:16:47.089 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:47.089 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72373 /var/tmp/bdevperf.sock 00:16:47.089 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72373 ']' 00:16:47.089 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:47.089 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.089 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:47.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
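The bdevperf invocation above can be reproduced standalone from an SPDK checkout; the annotated form below is a reading aid, not part of the captured output, and the binary path assumes the build layout used in this run.

./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
# -m 2   core mask 0x2, so the reactor runs on core 1 (as the log reports below)
# -z     start idle and wait for the perform_tests RPC before issuing any I/O
# -r     RPC socket path that waitforlisten polls for readiness
# -q/-o  queue depth 128, 4 KiB I/O size
# -w/-t  verify workload, 1 second run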
00:16:47.089 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.089 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.089 [2024-12-05 19:36:40.386178] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:16:47.089 [2024-12-05 19:36:40.386517] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72373 ] 00:16:47.349 [2024-12-05 19:36:40.529635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.349 [2024-12-05 19:36:40.582855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.349 [2024-12-05 19:36:40.637218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:47.349 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.349 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:47.349 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BrItUbweve 00:16:47.608 19:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:47.868 [2024-12-05 19:36:41.171568] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:47.868 nvme0n1 00:16:47.868 19:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:48.127 Running I/O for 1 seconds... 
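Stripped of the xtrace noise, the two RPCs above are the whole initiator-side TLS setup: the PSK file is registered as a named key, then the controller attach references that key by name. A minimal sketch of the same sequence against a running bdevperf instance (socket path, key file and NQNs are the ones from this run):

scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BrItUbweve
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The matching target-side piece is the nvmf_subsystem_add_host entry with "psk": "key0" in the configuration dumped further below.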
00:16:49.109 4224.00 IOPS, 16.50 MiB/s 00:16:49.109 Latency(us) 00:16:49.109 [2024-12-05T19:36:42.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.109 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:49.109 Verification LBA range: start 0x0 length 0x2000 00:16:49.109 nvme0n1 : 1.02 4275.50 16.70 0.00 0.00 29639.31 7626.01 18707.55 00:16:49.109 [2024-12-05T19:36:42.546Z] =================================================================================================================== 00:16:49.109 [2024-12-05T19:36:42.546Z] Total : 4275.50 16.70 0.00 0.00 29639.31 7626.01 18707.55 00:16:49.109 { 00:16:49.109 "results": [ 00:16:49.109 { 00:16:49.109 "job": "nvme0n1", 00:16:49.109 "core_mask": "0x2", 00:16:49.109 "workload": "verify", 00:16:49.109 "status": "finished", 00:16:49.109 "verify_range": { 00:16:49.109 "start": 0, 00:16:49.109 "length": 8192 00:16:49.109 }, 00:16:49.109 "queue_depth": 128, 00:16:49.109 "io_size": 4096, 00:16:49.109 "runtime": 1.017893, 00:16:49.109 "iops": 4275.4985052456395, 00:16:49.109 "mibps": 16.70116603611578, 00:16:49.109 "io_failed": 0, 00:16:49.109 "io_timeout": 0, 00:16:49.109 "avg_latency_us": 29639.31037433155, 00:16:49.109 "min_latency_us": 7626.007272727273, 00:16:49.109 "max_latency_us": 18707.54909090909 00:16:49.109 } 00:16:49.109 ], 00:16:49.109 "core_count": 1 00:16:49.109 } 00:16:49.109 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:16:49.109 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.109 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.389 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.389 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:16:49.389 "subsystems": [ 00:16:49.389 { 00:16:49.389 "subsystem": "keyring", 00:16:49.389 "config": [ 00:16:49.389 { 00:16:49.389 "method": "keyring_file_add_key", 00:16:49.389 "params": { 00:16:49.389 "name": "key0", 00:16:49.389 "path": "/tmp/tmp.BrItUbweve" 00:16:49.389 } 00:16:49.389 } 00:16:49.389 ] 00:16:49.389 }, 00:16:49.389 { 00:16:49.389 "subsystem": "iobuf", 00:16:49.389 "config": [ 00:16:49.389 { 00:16:49.389 "method": "iobuf_set_options", 00:16:49.389 "params": { 00:16:49.389 "small_pool_count": 8192, 00:16:49.389 "large_pool_count": 1024, 00:16:49.389 "small_bufsize": 8192, 00:16:49.389 "large_bufsize": 135168, 00:16:49.389 "enable_numa": false 00:16:49.389 } 00:16:49.389 } 00:16:49.389 ] 00:16:49.389 }, 00:16:49.389 { 00:16:49.389 "subsystem": "sock", 00:16:49.389 "config": [ 00:16:49.389 { 00:16:49.389 "method": "sock_set_default_impl", 00:16:49.389 "params": { 00:16:49.389 "impl_name": "uring" 00:16:49.389 } 00:16:49.389 }, 00:16:49.389 { 00:16:49.389 "method": "sock_impl_set_options", 00:16:49.389 "params": { 00:16:49.389 "impl_name": "ssl", 00:16:49.389 "recv_buf_size": 4096, 00:16:49.389 "send_buf_size": 4096, 00:16:49.389 "enable_recv_pipe": true, 00:16:49.389 "enable_quickack": false, 00:16:49.389 "enable_placement_id": 0, 00:16:49.389 "enable_zerocopy_send_server": true, 00:16:49.389 "enable_zerocopy_send_client": false, 00:16:49.389 "zerocopy_threshold": 0, 00:16:49.389 "tls_version": 0, 00:16:49.389 "enable_ktls": false 00:16:49.389 } 00:16:49.389 }, 00:16:49.389 { 00:16:49.389 "method": "sock_impl_set_options", 00:16:49.389 "params": { 00:16:49.389 "impl_name": "posix", 
00:16:49.389 "recv_buf_size": 2097152, 00:16:49.389 "send_buf_size": 2097152, 00:16:49.389 "enable_recv_pipe": true, 00:16:49.389 "enable_quickack": false, 00:16:49.389 "enable_placement_id": 0, 00:16:49.389 "enable_zerocopy_send_server": true, 00:16:49.389 "enable_zerocopy_send_client": false, 00:16:49.389 "zerocopy_threshold": 0, 00:16:49.389 "tls_version": 0, 00:16:49.389 "enable_ktls": false 00:16:49.389 } 00:16:49.389 }, 00:16:49.389 { 00:16:49.389 "method": "sock_impl_set_options", 00:16:49.389 "params": { 00:16:49.389 "impl_name": "uring", 00:16:49.389 "recv_buf_size": 2097152, 00:16:49.389 "send_buf_size": 2097152, 00:16:49.389 "enable_recv_pipe": true, 00:16:49.389 "enable_quickack": false, 00:16:49.389 "enable_placement_id": 0, 00:16:49.389 "enable_zerocopy_send_server": false, 00:16:49.389 "enable_zerocopy_send_client": false, 00:16:49.389 "zerocopy_threshold": 0, 00:16:49.389 "tls_version": 0, 00:16:49.389 "enable_ktls": false 00:16:49.389 } 00:16:49.389 } 00:16:49.389 ] 00:16:49.389 }, 00:16:49.389 { 00:16:49.389 "subsystem": "vmd", 00:16:49.389 "config": [] 00:16:49.389 }, 00:16:49.389 { 00:16:49.389 "subsystem": "accel", 00:16:49.389 "config": [ 00:16:49.389 { 00:16:49.389 "method": "accel_set_options", 00:16:49.389 "params": { 00:16:49.389 "small_cache_size": 128, 00:16:49.389 "large_cache_size": 16, 00:16:49.389 "task_count": 2048, 00:16:49.389 "sequence_count": 2048, 00:16:49.389 "buf_count": 2048 00:16:49.389 } 00:16:49.389 } 00:16:49.389 ] 00:16:49.389 }, 00:16:49.389 { 00:16:49.389 "subsystem": "bdev", 00:16:49.389 "config": [ 00:16:49.389 { 00:16:49.389 "method": "bdev_set_options", 00:16:49.389 "params": { 00:16:49.389 "bdev_io_pool_size": 65535, 00:16:49.389 "bdev_io_cache_size": 256, 00:16:49.389 "bdev_auto_examine": true, 00:16:49.389 "iobuf_small_cache_size": 128, 00:16:49.389 "iobuf_large_cache_size": 16 00:16:49.389 } 00:16:49.389 }, 00:16:49.389 { 00:16:49.389 "method": "bdev_raid_set_options", 00:16:49.389 "params": { 00:16:49.389 "process_window_size_kb": 1024, 00:16:49.389 "process_max_bandwidth_mb_sec": 0 00:16:49.389 } 00:16:49.389 }, 00:16:49.389 { 00:16:49.389 "method": "bdev_iscsi_set_options", 00:16:49.389 "params": { 00:16:49.389 "timeout_sec": 30 00:16:49.389 } 00:16:49.389 }, 00:16:49.389 { 00:16:49.389 "method": "bdev_nvme_set_options", 00:16:49.389 "params": { 00:16:49.389 "action_on_timeout": "none", 00:16:49.389 "timeout_us": 0, 00:16:49.389 "timeout_admin_us": 0, 00:16:49.389 "keep_alive_timeout_ms": 10000, 00:16:49.389 "arbitration_burst": 0, 00:16:49.389 "low_priority_weight": 0, 00:16:49.389 "medium_priority_weight": 0, 00:16:49.389 "high_priority_weight": 0, 00:16:49.389 "nvme_adminq_poll_period_us": 10000, 00:16:49.389 "nvme_ioq_poll_period_us": 0, 00:16:49.389 "io_queue_requests": 0, 00:16:49.389 "delay_cmd_submit": true, 00:16:49.389 "transport_retry_count": 4, 00:16:49.389 "bdev_retry_count": 3, 00:16:49.389 "transport_ack_timeout": 0, 00:16:49.389 "ctrlr_loss_timeout_sec": 0, 00:16:49.389 "reconnect_delay_sec": 0, 00:16:49.389 "fast_io_fail_timeout_sec": 0, 00:16:49.389 "disable_auto_failback": false, 00:16:49.389 "generate_uuids": false, 00:16:49.389 "transport_tos": 0, 00:16:49.389 "nvme_error_stat": false, 00:16:49.389 "rdma_srq_size": 0, 00:16:49.389 "io_path_stat": false, 00:16:49.389 "allow_accel_sequence": false, 00:16:49.389 "rdma_max_cq_size": 0, 00:16:49.389 "rdma_cm_event_timeout_ms": 0, 00:16:49.389 "dhchap_digests": [ 00:16:49.389 "sha256", 00:16:49.389 "sha384", 00:16:49.389 "sha512" 00:16:49.389 ], 00:16:49.389 
"dhchap_dhgroups": [ 00:16:49.389 "null", 00:16:49.389 "ffdhe2048", 00:16:49.389 "ffdhe3072", 00:16:49.389 "ffdhe4096", 00:16:49.389 "ffdhe6144", 00:16:49.389 "ffdhe8192" 00:16:49.389 ] 00:16:49.389 } 00:16:49.389 }, 00:16:49.389 { 00:16:49.389 "method": "bdev_nvme_set_hotplug", 00:16:49.389 "params": { 00:16:49.389 "period_us": 100000, 00:16:49.389 "enable": false 00:16:49.389 } 00:16:49.389 }, 00:16:49.389 { 00:16:49.389 "method": "bdev_malloc_create", 00:16:49.389 "params": { 00:16:49.389 "name": "malloc0", 00:16:49.389 "num_blocks": 8192, 00:16:49.389 "block_size": 4096, 00:16:49.389 "physical_block_size": 4096, 00:16:49.389 "uuid": "847ee43a-fddc-4f86-91a6-e757b52b8682", 00:16:49.389 "optimal_io_boundary": 0, 00:16:49.389 "md_size": 0, 00:16:49.389 "dif_type": 0, 00:16:49.389 "dif_is_head_of_md": false, 00:16:49.389 "dif_pi_format": 0 00:16:49.389 } 00:16:49.389 }, 00:16:49.389 { 00:16:49.389 "method": "bdev_wait_for_examine" 00:16:49.389 } 00:16:49.389 ] 00:16:49.389 }, 00:16:49.389 { 00:16:49.390 "subsystem": "nbd", 00:16:49.390 "config": [] 00:16:49.390 }, 00:16:49.390 { 00:16:49.390 "subsystem": "scheduler", 00:16:49.390 "config": [ 00:16:49.390 { 00:16:49.390 "method": "framework_set_scheduler", 00:16:49.390 "params": { 00:16:49.390 "name": "static" 00:16:49.390 } 00:16:49.390 } 00:16:49.390 ] 00:16:49.390 }, 00:16:49.390 { 00:16:49.390 "subsystem": "nvmf", 00:16:49.390 "config": [ 00:16:49.390 { 00:16:49.390 "method": "nvmf_set_config", 00:16:49.390 "params": { 00:16:49.390 "discovery_filter": "match_any", 00:16:49.390 "admin_cmd_passthru": { 00:16:49.390 "identify_ctrlr": false 00:16:49.390 }, 00:16:49.390 "dhchap_digests": [ 00:16:49.390 "sha256", 00:16:49.390 "sha384", 00:16:49.390 "sha512" 00:16:49.390 ], 00:16:49.390 "dhchap_dhgroups": [ 00:16:49.390 "null", 00:16:49.390 "ffdhe2048", 00:16:49.390 "ffdhe3072", 00:16:49.390 "ffdhe4096", 00:16:49.390 "ffdhe6144", 00:16:49.390 "ffdhe8192" 00:16:49.390 ] 00:16:49.390 } 00:16:49.390 }, 00:16:49.390 { 00:16:49.390 "method": "nvmf_set_max_subsystems", 00:16:49.390 "params": { 00:16:49.390 "max_subsystems": 1024 00:16:49.390 } 00:16:49.390 }, 00:16:49.390 { 00:16:49.390 "method": "nvmf_set_crdt", 00:16:49.390 "params": { 00:16:49.390 "crdt1": 0, 00:16:49.390 "crdt2": 0, 00:16:49.390 "crdt3": 0 00:16:49.390 } 00:16:49.390 }, 00:16:49.390 { 00:16:49.390 "method": "nvmf_create_transport", 00:16:49.390 "params": { 00:16:49.390 "trtype": "TCP", 00:16:49.390 "max_queue_depth": 128, 00:16:49.390 "max_io_qpairs_per_ctrlr": 127, 00:16:49.390 "in_capsule_data_size": 4096, 00:16:49.390 "max_io_size": 131072, 00:16:49.390 "io_unit_size": 131072, 00:16:49.390 "max_aq_depth": 128, 00:16:49.390 "num_shared_buffers": 511, 00:16:49.390 "buf_cache_size": 4294967295, 00:16:49.390 "dif_insert_or_strip": false, 00:16:49.390 "zcopy": false, 00:16:49.390 "c2h_success": false, 00:16:49.390 "sock_priority": 0, 00:16:49.390 "abort_timeout_sec": 1, 00:16:49.390 "ack_timeout": 0, 00:16:49.390 "data_wr_pool_size": 0 00:16:49.390 } 00:16:49.390 }, 00:16:49.390 { 00:16:49.390 "method": "nvmf_create_subsystem", 00:16:49.390 "params": { 00:16:49.390 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.390 "allow_any_host": false, 00:16:49.390 "serial_number": "00000000000000000000", 00:16:49.390 "model_number": "SPDK bdev Controller", 00:16:49.390 "max_namespaces": 32, 00:16:49.390 "min_cntlid": 1, 00:16:49.390 "max_cntlid": 65519, 00:16:49.390 "ana_reporting": false 00:16:49.390 } 00:16:49.390 }, 00:16:49.390 { 00:16:49.390 "method": "nvmf_subsystem_add_host", 
00:16:49.390 "params": { 00:16:49.390 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.390 "host": "nqn.2016-06.io.spdk:host1", 00:16:49.390 "psk": "key0" 00:16:49.390 } 00:16:49.390 }, 00:16:49.390 { 00:16:49.390 "method": "nvmf_subsystem_add_ns", 00:16:49.390 "params": { 00:16:49.390 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.390 "namespace": { 00:16:49.390 "nsid": 1, 00:16:49.390 "bdev_name": "malloc0", 00:16:49.390 "nguid": "847EE43AFDDC4F8691A6E757B52B8682", 00:16:49.390 "uuid": "847ee43a-fddc-4f86-91a6-e757b52b8682", 00:16:49.390 "no_auto_visible": false 00:16:49.390 } 00:16:49.390 } 00:16:49.390 }, 00:16:49.390 { 00:16:49.390 "method": "nvmf_subsystem_add_listener", 00:16:49.390 "params": { 00:16:49.390 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.390 "listen_address": { 00:16:49.390 "trtype": "TCP", 00:16:49.390 "adrfam": "IPv4", 00:16:49.390 "traddr": "10.0.0.3", 00:16:49.390 "trsvcid": "4420" 00:16:49.390 }, 00:16:49.390 "secure_channel": false, 00:16:49.390 "sock_impl": "ssl" 00:16:49.390 } 00:16:49.390 } 00:16:49.390 ] 00:16:49.390 } 00:16:49.390 ] 00:16:49.390 }' 00:16:49.390 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:49.649 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:16:49.649 "subsystems": [ 00:16:49.649 { 00:16:49.649 "subsystem": "keyring", 00:16:49.649 "config": [ 00:16:49.649 { 00:16:49.649 "method": "keyring_file_add_key", 00:16:49.649 "params": { 00:16:49.649 "name": "key0", 00:16:49.649 "path": "/tmp/tmp.BrItUbweve" 00:16:49.649 } 00:16:49.649 } 00:16:49.649 ] 00:16:49.649 }, 00:16:49.649 { 00:16:49.649 "subsystem": "iobuf", 00:16:49.649 "config": [ 00:16:49.649 { 00:16:49.649 "method": "iobuf_set_options", 00:16:49.649 "params": { 00:16:49.649 "small_pool_count": 8192, 00:16:49.649 "large_pool_count": 1024, 00:16:49.649 "small_bufsize": 8192, 00:16:49.649 "large_bufsize": 135168, 00:16:49.649 "enable_numa": false 00:16:49.649 } 00:16:49.649 } 00:16:49.649 ] 00:16:49.649 }, 00:16:49.649 { 00:16:49.649 "subsystem": "sock", 00:16:49.649 "config": [ 00:16:49.649 { 00:16:49.649 "method": "sock_set_default_impl", 00:16:49.649 "params": { 00:16:49.649 "impl_name": "uring" 00:16:49.649 } 00:16:49.649 }, 00:16:49.649 { 00:16:49.649 "method": "sock_impl_set_options", 00:16:49.649 "params": { 00:16:49.649 "impl_name": "ssl", 00:16:49.649 "recv_buf_size": 4096, 00:16:49.649 "send_buf_size": 4096, 00:16:49.649 "enable_recv_pipe": true, 00:16:49.649 "enable_quickack": false, 00:16:49.649 "enable_placement_id": 0, 00:16:49.649 "enable_zerocopy_send_server": true, 00:16:49.649 "enable_zerocopy_send_client": false, 00:16:49.649 "zerocopy_threshold": 0, 00:16:49.649 "tls_version": 0, 00:16:49.649 "enable_ktls": false 00:16:49.650 } 00:16:49.650 }, 00:16:49.650 { 00:16:49.650 "method": "sock_impl_set_options", 00:16:49.650 "params": { 00:16:49.650 "impl_name": "posix", 00:16:49.650 "recv_buf_size": 2097152, 00:16:49.650 "send_buf_size": 2097152, 00:16:49.650 "enable_recv_pipe": true, 00:16:49.650 "enable_quickack": false, 00:16:49.650 "enable_placement_id": 0, 00:16:49.650 "enable_zerocopy_send_server": true, 00:16:49.650 "enable_zerocopy_send_client": false, 00:16:49.650 "zerocopy_threshold": 0, 00:16:49.650 "tls_version": 0, 00:16:49.650 "enable_ktls": false 00:16:49.650 } 00:16:49.650 }, 00:16:49.650 { 00:16:49.650 "method": "sock_impl_set_options", 00:16:49.650 "params": { 00:16:49.650 "impl_name": "uring", 00:16:49.650 
"recv_buf_size": 2097152, 00:16:49.650 "send_buf_size": 2097152, 00:16:49.650 "enable_recv_pipe": true, 00:16:49.650 "enable_quickack": false, 00:16:49.650 "enable_placement_id": 0, 00:16:49.650 "enable_zerocopy_send_server": false, 00:16:49.650 "enable_zerocopy_send_client": false, 00:16:49.650 "zerocopy_threshold": 0, 00:16:49.650 "tls_version": 0, 00:16:49.650 "enable_ktls": false 00:16:49.650 } 00:16:49.650 } 00:16:49.650 ] 00:16:49.650 }, 00:16:49.650 { 00:16:49.650 "subsystem": "vmd", 00:16:49.650 "config": [] 00:16:49.650 }, 00:16:49.650 { 00:16:49.650 "subsystem": "accel", 00:16:49.650 "config": [ 00:16:49.650 { 00:16:49.650 "method": "accel_set_options", 00:16:49.650 "params": { 00:16:49.650 "small_cache_size": 128, 00:16:49.650 "large_cache_size": 16, 00:16:49.650 "task_count": 2048, 00:16:49.650 "sequence_count": 2048, 00:16:49.650 "buf_count": 2048 00:16:49.650 } 00:16:49.650 } 00:16:49.650 ] 00:16:49.650 }, 00:16:49.650 { 00:16:49.650 "subsystem": "bdev", 00:16:49.650 "config": [ 00:16:49.650 { 00:16:49.650 "method": "bdev_set_options", 00:16:49.650 "params": { 00:16:49.650 "bdev_io_pool_size": 65535, 00:16:49.650 "bdev_io_cache_size": 256, 00:16:49.650 "bdev_auto_examine": true, 00:16:49.650 "iobuf_small_cache_size": 128, 00:16:49.650 "iobuf_large_cache_size": 16 00:16:49.650 } 00:16:49.650 }, 00:16:49.650 { 00:16:49.650 "method": "bdev_raid_set_options", 00:16:49.650 "params": { 00:16:49.650 "process_window_size_kb": 1024, 00:16:49.650 "process_max_bandwidth_mb_sec": 0 00:16:49.650 } 00:16:49.650 }, 00:16:49.650 { 00:16:49.650 "method": "bdev_iscsi_set_options", 00:16:49.650 "params": { 00:16:49.650 "timeout_sec": 30 00:16:49.650 } 00:16:49.650 }, 00:16:49.650 { 00:16:49.650 "method": "bdev_nvme_set_options", 00:16:49.650 "params": { 00:16:49.650 "action_on_timeout": "none", 00:16:49.650 "timeout_us": 0, 00:16:49.650 "timeout_admin_us": 0, 00:16:49.650 "keep_alive_timeout_ms": 10000, 00:16:49.650 "arbitration_burst": 0, 00:16:49.650 "low_priority_weight": 0, 00:16:49.650 "medium_priority_weight": 0, 00:16:49.650 "high_priority_weight": 0, 00:16:49.650 "nvme_adminq_poll_period_us": 10000, 00:16:49.650 "nvme_ioq_poll_period_us": 0, 00:16:49.650 "io_queue_requests": 512, 00:16:49.650 "delay_cmd_submit": true, 00:16:49.650 "transport_retry_count": 4, 00:16:49.650 "bdev_retry_count": 3, 00:16:49.650 "transport_ack_timeout": 0, 00:16:49.650 "ctrlr_loss_timeout_sec": 0, 00:16:49.650 "reconnect_delay_sec": 0, 00:16:49.650 "fast_io_fail_timeout_sec": 0, 00:16:49.650 "disable_auto_failback": false, 00:16:49.650 "generate_uuids": false, 00:16:49.650 "transport_tos": 0, 00:16:49.650 "nvme_error_stat": false, 00:16:49.650 "rdma_srq_size": 0, 00:16:49.650 "io_path_stat": false, 00:16:49.650 "allow_accel_sequence": false, 00:16:49.650 "rdma_max_cq_size": 0, 00:16:49.650 "rdma_cm_event_timeout_ms": 0, 00:16:49.650 "dhchap_digests": [ 00:16:49.650 "sha256", 00:16:49.650 "sha384", 00:16:49.650 "sha512" 00:16:49.650 ], 00:16:49.650 "dhchap_dhgroups": [ 00:16:49.650 "null", 00:16:49.650 "ffdhe2048", 00:16:49.650 "ffdhe3072", 00:16:49.650 "ffdhe4096", 00:16:49.650 "ffdhe6144", 00:16:49.650 "ffdhe8192" 00:16:49.650 ] 00:16:49.650 } 00:16:49.650 }, 00:16:49.650 { 00:16:49.650 "method": "bdev_nvme_attach_controller", 00:16:49.650 "params": { 00:16:49.650 "name": "nvme0", 00:16:49.650 "trtype": "TCP", 00:16:49.650 "adrfam": "IPv4", 00:16:49.650 "traddr": "10.0.0.3", 00:16:49.650 "trsvcid": "4420", 00:16:49.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.650 "prchk_reftag": false, 00:16:49.650 
"prchk_guard": false, 00:16:49.650 "ctrlr_loss_timeout_sec": 0, 00:16:49.650 "reconnect_delay_sec": 0, 00:16:49.650 "fast_io_fail_timeout_sec": 0, 00:16:49.650 "psk": "key0", 00:16:49.650 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:49.650 "hdgst": false, 00:16:49.650 "ddgst": false, 00:16:49.650 "multipath": "multipath" 00:16:49.650 } 00:16:49.650 }, 00:16:49.650 { 00:16:49.650 "method": "bdev_nvme_set_hotplug", 00:16:49.650 "params": { 00:16:49.650 "period_us": 100000, 00:16:49.650 "enable": false 00:16:49.650 } 00:16:49.650 }, 00:16:49.650 { 00:16:49.650 "method": "bdev_enable_histogram", 00:16:49.650 "params": { 00:16:49.650 "name": "nvme0n1", 00:16:49.650 "enable": true 00:16:49.650 } 00:16:49.650 }, 00:16:49.650 { 00:16:49.650 "method": "bdev_wait_for_examine" 00:16:49.650 } 00:16:49.650 ] 00:16:49.650 }, 00:16:49.650 { 00:16:49.650 "subsystem": "nbd", 00:16:49.650 "config": [] 00:16:49.650 } 00:16:49.650 ] 00:16:49.650 }' 00:16:49.650 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72373 00:16:49.650 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72373 ']' 00:16:49.650 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72373 00:16:49.650 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:49.650 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.650 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72373 00:16:49.650 killing process with pid 72373 00:16:49.650 Received shutdown signal, test time was about 1.000000 seconds 00:16:49.650 00:16:49.650 Latency(us) 00:16:49.650 [2024-12-05T19:36:43.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.650 [2024-12-05T19:36:43.087Z] =================================================================================================================== 00:16:49.650 [2024-12-05T19:36:43.087Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:49.650 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:49.650 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:49.650 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72373' 00:16:49.650 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72373 00:16:49.650 19:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72373 00:16:49.909 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72341 00:16:49.909 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72341 ']' 00:16:49.909 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72341 00:16:49.909 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:49.909 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.909 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72341 00:16:49.909 killing process with pid 72341 00:16:49.909 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:16:49.909 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.910 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72341' 00:16:49.910 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72341 00:16:49.910 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72341 00:16:50.169 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:16:50.169 "subsystems": [ 00:16:50.169 { 00:16:50.169 "subsystem": "keyring", 00:16:50.169 "config": [ 00:16:50.169 { 00:16:50.169 "method": "keyring_file_add_key", 00:16:50.169 "params": { 00:16:50.169 "name": "key0", 00:16:50.169 "path": "/tmp/tmp.BrItUbweve" 00:16:50.169 } 00:16:50.169 } 00:16:50.169 ] 00:16:50.169 }, 00:16:50.169 { 00:16:50.169 "subsystem": "iobuf", 00:16:50.169 "config": [ 00:16:50.169 { 00:16:50.169 "method": "iobuf_set_options", 00:16:50.169 "params": { 00:16:50.169 "small_pool_count": 8192, 00:16:50.169 "large_pool_count": 1024, 00:16:50.169 "small_bufsize": 8192, 00:16:50.169 "large_bufsize": 135168, 00:16:50.169 "enable_numa": false 00:16:50.169 } 00:16:50.169 } 00:16:50.169 ] 00:16:50.169 }, 00:16:50.169 { 00:16:50.169 "subsystem": "sock", 00:16:50.169 "config": [ 00:16:50.169 { 00:16:50.169 "method": "sock_set_default_impl", 00:16:50.169 "params": { 00:16:50.169 "impl_name": "uring" 00:16:50.169 } 00:16:50.169 }, 00:16:50.169 { 00:16:50.169 "method": "sock_impl_set_options", 00:16:50.169 "params": { 00:16:50.169 "impl_name": "ssl", 00:16:50.169 "recv_buf_size": 4096, 00:16:50.169 "send_buf_size": 4096, 00:16:50.169 "enable_recv_pipe": true, 00:16:50.169 "enable_quickack": false, 00:16:50.169 "enable_placement_id": 0, 00:16:50.169 "enable_zerocopy_send_server": true, 00:16:50.169 "enable_zerocopy_send_client": false, 00:16:50.169 "zerocopy_threshold": 0, 00:16:50.169 "tls_version": 0, 00:16:50.169 "enable_ktls": false 00:16:50.169 } 00:16:50.169 }, 00:16:50.169 { 00:16:50.169 "method": "sock_impl_set_options", 00:16:50.169 "params": { 00:16:50.169 "impl_name": "posix", 00:16:50.169 "recv_buf_size": 2097152, 00:16:50.169 "send_buf_size": 2097152, 00:16:50.169 "enable_recv_pipe": true, 00:16:50.169 "enable_quickack": false, 00:16:50.169 "enable_placement_id": 0, 00:16:50.169 "enable_zerocopy_send_server": true, 00:16:50.169 "enable_zerocopy_send_client": false, 00:16:50.169 "zerocopy_threshold": 0, 00:16:50.169 "tls_version": 0, 00:16:50.169 "enable_ktls": false 00:16:50.169 } 00:16:50.169 }, 00:16:50.169 { 00:16:50.169 "method": "sock_impl_set_options", 00:16:50.169 "params": { 00:16:50.169 "impl_name": "uring", 00:16:50.169 "recv_buf_size": 2097152, 00:16:50.169 "send_buf_size": 2097152, 00:16:50.169 "enable_recv_pipe": true, 00:16:50.169 "enable_quickack": false, 00:16:50.169 "enable_placement_id": 0, 00:16:50.169 "enable_zerocopy_send_server": false, 00:16:50.169 "enable_zerocopy_send_client": false, 00:16:50.169 "zerocopy_threshold": 0, 00:16:50.169 "tls_version": 0, 00:16:50.169 "enable_ktls": false 00:16:50.169 } 00:16:50.169 } 00:16:50.169 ] 00:16:50.169 }, 00:16:50.169 { 00:16:50.169 "subsystem": "vmd", 00:16:50.169 "config": [] 00:16:50.169 }, 00:16:50.169 { 00:16:50.169 "subsystem": "accel", 00:16:50.169 "config": [ 00:16:50.169 { 00:16:50.169 "method": "accel_set_options", 00:16:50.169 "params": { 00:16:50.169 "small_cache_size": 128, 00:16:50.169 "large_cache_size": 16, 00:16:50.169 "task_count": 
2048, 00:16:50.169 "sequence_count": 2048, 00:16:50.169 "buf_count": 2048 00:16:50.169 } 00:16:50.169 } 00:16:50.169 ] 00:16:50.169 }, 00:16:50.169 { 00:16:50.169 "subsystem": "bdev", 00:16:50.169 "config": [ 00:16:50.169 { 00:16:50.169 "method": "bdev_set_options", 00:16:50.169 "params": { 00:16:50.169 "bdev_io_pool_size": 65535, 00:16:50.169 "bdev_io_cache_size": 256, 00:16:50.169 "bdev_auto_examine": true, 00:16:50.169 "iobuf_small_cache_size": 128, 00:16:50.169 "iobuf_large_cache_size": 16 00:16:50.169 } 00:16:50.169 }, 00:16:50.169 { 00:16:50.169 "method": "bdev_raid_set_options", 00:16:50.169 "params": { 00:16:50.169 "process_window_size_kb": 1024, 00:16:50.169 "process_max_bandwidth_mb_sec": 0 00:16:50.169 } 00:16:50.169 }, 00:16:50.169 { 00:16:50.169 "method": "bdev_iscsi_set_options", 00:16:50.169 "params": { 00:16:50.169 "timeout_sec": 30 00:16:50.169 } 00:16:50.169 }, 00:16:50.169 { 00:16:50.169 "method": "bdev_nvme_set_options", 00:16:50.169 "params": { 00:16:50.169 "action_on_timeout": "none", 00:16:50.169 "timeout_us": 0, 00:16:50.169 "timeout_admin_us": 0, 00:16:50.169 "keep_alive_timeout_ms": 10000, 00:16:50.169 "arbitration_burst": 0, 00:16:50.170 "low_priority_weight": 0, 00:16:50.170 "medium_priority_weight": 0, 00:16:50.170 "high_priority_weight": 0, 00:16:50.170 "nvme_adminq_poll_period_us": 10000, 00:16:50.170 "nvme_ioq_poll_period_us": 0, 00:16:50.170 "io_queue_requests": 0, 00:16:50.170 "delay_cmd_submit": true, 00:16:50.170 "transport_retry_count": 4, 00:16:50.170 "bdev_retry_count": 3, 00:16:50.170 "transport_ack_timeout": 0, 00:16:50.170 "ctrlr_loss_timeout_sec": 0, 00:16:50.170 "reconnect_delay_sec": 0, 00:16:50.170 "fast_io_fail_timeout_sec": 0, 00:16:50.170 "disable_auto_failback": false, 00:16:50.170 "generate_uuids": false, 00:16:50.170 "transport_tos": 0, 00:16:50.170 "nvme_error_stat": false, 00:16:50.170 "rdma_srq_size": 0, 00:16:50.170 "io_path_stat": false, 00:16:50.170 "allow_accel_sequence": false, 00:16:50.170 "rdma_max_cq_size": 0, 00:16:50.170 "rdma_cm_event_timeout_ms": 0, 00:16:50.170 "dhchap_digests": [ 00:16:50.170 "sha256", 00:16:50.170 "sha384", 00:16:50.170 "sha512" 00:16:50.170 ], 00:16:50.170 "dhchap_dhgroups": [ 00:16:50.170 "null", 00:16:50.170 "ffdhe2048", 00:16:50.170 "ffdhe3072", 00:16:50.170 "ffdhe4096", 00:16:50.170 "ffdhe6144", 00:16:50.170 "ffdhe8192" 00:16:50.170 ] 00:16:50.170 } 00:16:50.170 }, 00:16:50.170 { 00:16:50.170 "method": "bdev_nvme_set_hotplug", 00:16:50.170 "params": { 00:16:50.170 "period_us": 100000, 00:16:50.170 "enable": false 00:16:50.170 } 00:16:50.170 }, 00:16:50.170 { 00:16:50.170 "method": "bdev_malloc_create", 00:16:50.170 "params": { 00:16:50.170 "name": "malloc0", 00:16:50.170 "num_blocks": 8192, 00:16:50.170 "block_size": 4096, 00:16:50.170 "physical_block_size": 4096, 00:16:50.170 "uuid": "847ee43a-fddc-4f86-91a6-e757b52b8682", 00:16:50.170 "optimal_io_boundary": 0, 00:16:50.170 "md_size": 0, 00:16:50.170 "dif_type": 0, 00:16:50.170 "dif_is_head_of_md": false, 00:16:50.170 "dif_pi_format": 0 00:16:50.170 } 00:16:50.170 }, 00:16:50.170 { 00:16:50.170 "method": "bdev_wait_for_examine" 00:16:50.170 } 00:16:50.170 ] 00:16:50.170 }, 00:16:50.170 { 00:16:50.170 "subsystem": "nbd", 00:16:50.170 "config": [] 00:16:50.170 }, 00:16:50.170 { 00:16:50.170 "subsystem": "scheduler", 00:16:50.170 "config": [ 00:16:50.170 { 00:16:50.170 "method": "framework_set_scheduler", 00:16:50.170 "params": { 00:16:50.170 "name": "static" 00:16:50.170 } 00:16:50.170 } 00:16:50.170 ] 00:16:50.170 }, 00:16:50.170 { 00:16:50.170 
"subsystem": "nvmf", 00:16:50.170 "config": [ 00:16:50.170 { 00:16:50.170 "method": "nvmf_set_config", 00:16:50.170 "params": { 00:16:50.170 "discovery_filter": "match_any", 00:16:50.170 "admin_cmd_passthru": { 00:16:50.170 "identify_ctrlr": false 00:16:50.170 }, 00:16:50.170 "dhchap_digests": [ 00:16:50.170 "sha256", 00:16:50.170 "sha384", 00:16:50.170 "sha512" 00:16:50.170 ], 00:16:50.170 "dhchap_dhgroups": [ 00:16:50.170 "null", 00:16:50.170 "ffdhe2048", 00:16:50.170 "ffdhe3072", 00:16:50.170 "ffdhe4096", 00:16:50.170 "ffdhe6144", 00:16:50.170 "ffdhe8192" 00:16:50.170 ] 00:16:50.170 } 00:16:50.170 }, 00:16:50.170 { 00:16:50.170 "method": "nvmf_set_max_subsystems", 00:16:50.170 "params": { 00:16:50.170 "max_subsystems": 1024 00:16:50.170 } 00:16:50.170 }, 00:16:50.170 { 00:16:50.170 "method": "nvmf_set_crdt", 00:16:50.170 "params": { 00:16:50.170 "crdt1": 0, 00:16:50.170 "crdt2": 0, 00:16:50.170 "crdt3": 0 00:16:50.170 } 00:16:50.170 }, 00:16:50.170 { 00:16:50.170 "method": "nvmf_create_transport", 00:16:50.170 "params": { 00:16:50.170 "trtype": "TCP", 00:16:50.170 "max_queue_depth": 128, 00:16:50.170 "max_io_qpairs_per_ctrlr": 127, 00:16:50.170 "in_capsule_data_size": 4096, 00:16:50.170 "max_io_size": 131072, 00:16:50.170 "io_unit_size": 131072, 00:16:50.170 "max_aq_depth": 128, 00:16:50.170 "num_shared_buffers": 511, 00:16:50.170 "buf_cache_size": 4294967295, 00:16:50.170 "dif_insert_or_strip": false, 00:16:50.170 "zcopy": false, 00:16:50.170 "c2h_success": false, 00:16:50.170 "sock_priority": 0, 00:16:50.170 "abort_timeout_sec": 1, 00:16:50.170 "ack_timeout": 0, 00:16:50.170 "data_wr_pool_size": 0 00:16:50.170 } 00:16:50.170 }, 00:16:50.170 { 00:16:50.170 "method": "nvmf_create_subsystem", 00:16:50.170 "params": { 00:16:50.170 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.170 "allow_any_host": false, 00:16:50.170 "serial_number": "00000000000000000000", 00:16:50.170 "model_number": "SPDK bdev Controller", 00:16:50.170 "max_namespaces": 32, 00:16:50.170 "min_cntlid": 1, 00:16:50.170 "max_cntlid": 65519, 00:16:50.170 "ana_reporting": false 00:16:50.170 } 00:16:50.170 }, 00:16:50.170 { 00:16:50.170 "method": "nvmf_subsystem_add_host", 00:16:50.170 "params": { 00:16:50.170 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.170 "host": "nqn.2016-06.io.spdk:host1", 00:16:50.170 "psk": "key0" 00:16:50.170 } 00:16:50.170 }, 00:16:50.170 { 00:16:50.170 "method": "nvmf_subsystem_add_ns", 00:16:50.170 "params": { 00:16:50.170 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.170 "namespace": { 00:16:50.170 "nsid": 1, 00:16:50.170 "bdev_name": "malloc0", 00:16:50.170 "nguid": "847EE43AFDDC4F8691A6E757B52B8682", 00:16:50.170 "uuid": "847ee43a-fddc-4f86-91a6-e757b52b8682", 00:16:50.170 "no_auto_visible": false 00:16:50.170 } 00:16:50.170 } 00:16:50.170 }, 00:16:50.170 { 00:16:50.170 "method": "nvmf_subsystem_add_listener", 00:16:50.170 "params": { 00:16:50.170 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.170 "listen_address": { 00:16:50.170 "trtype": "TCP", 00:16:50.170 "adrfam": "IPv4", 00:16:50.170 "traddr": "10.0.0.3", 00:16:50.170 "trsvcid": "4420" 00:16:50.170 }, 00:16:50.170 "secure_channel": false, 00:16:50.170 "sock_impl": "ssl" 00:16:50.170 } 00:16:50.170 } 00:16:50.170 ] 00:16:50.170 } 00:16:50.170 ] 00:16:50.170 }' 00:16:50.170 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:16:50.170 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:50.170 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:16:50.170 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.170 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72425 00:16:50.170 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:50.170 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72425 00:16:50.170 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72425 ']' 00:16:50.170 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.170 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.170 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.170 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.170 19:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.170 [2024-12-05 19:36:43.457289] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:16:50.170 [2024-12-05 19:36:43.457575] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.430 [2024-12-05 19:36:43.610197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.430 [2024-12-05 19:36:43.668130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.430 [2024-12-05 19:36:43.668519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.430 [2024-12-05 19:36:43.668725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:50.430 [2024-12-05 19:36:43.668854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:50.430 [2024-12-05 19:36:43.668887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
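The restart above feeds the configuration saved from the first target instance into a fresh nvmf_tgt through a process-substitution file descriptor (the -c /dev/fd/62 in the recorded command line). A sketch of the pattern, with $tgtcfg standing in for the JSON captured by the earlier save_config call:

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")
# equivalent to the '-c /dev/fd/62' form seen in the trace; nvmf_tgt reads the
# JSON config at startup and recreates the subsystem, listener and PSK host entry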
00:16:50.430 [2024-12-05 19:36:43.669463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.430 [2024-12-05 19:36:43.838385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:50.689 [2024-12-05 19:36:43.920383] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.689 [2024-12-05 19:36:43.952340] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:50.689 [2024-12-05 19:36:43.952743] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:51.256 19:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.256 19:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:51.256 19:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:51.256 19:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:51.256 19:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:51.256 19:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.256 19:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72457 00:16:51.256 19:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72457 /var/tmp/bdevperf.sock 00:16:51.256 19:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72457 ']' 00:16:51.256 19:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.256 19:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.256 19:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:51.256 19:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.256 19:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.256 19:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:51.256 19:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:16:51.256 "subsystems": [ 00:16:51.256 { 00:16:51.256 "subsystem": "keyring", 00:16:51.256 "config": [ 00:16:51.256 { 00:16:51.256 "method": "keyring_file_add_key", 00:16:51.256 "params": { 00:16:51.256 "name": "key0", 00:16:51.256 "path": "/tmp/tmp.BrItUbweve" 00:16:51.256 } 00:16:51.256 } 00:16:51.256 ] 00:16:51.256 }, 00:16:51.256 { 00:16:51.256 "subsystem": "iobuf", 00:16:51.256 "config": [ 00:16:51.256 { 00:16:51.256 "method": "iobuf_set_options", 00:16:51.256 "params": { 00:16:51.256 "small_pool_count": 8192, 00:16:51.256 "large_pool_count": 1024, 00:16:51.256 "small_bufsize": 8192, 00:16:51.256 "large_bufsize": 135168, 00:16:51.256 "enable_numa": false 00:16:51.256 } 00:16:51.256 } 00:16:51.256 ] 00:16:51.256 }, 00:16:51.256 { 00:16:51.256 "subsystem": "sock", 00:16:51.256 "config": [ 00:16:51.256 { 00:16:51.256 "method": "sock_set_default_impl", 00:16:51.256 "params": { 00:16:51.256 "impl_name": "uring" 00:16:51.256 } 00:16:51.256 }, 00:16:51.256 { 00:16:51.256 "method": "sock_impl_set_options", 00:16:51.256 "params": { 00:16:51.256 "impl_name": "ssl", 00:16:51.256 "recv_buf_size": 4096, 00:16:51.256 "send_buf_size": 4096, 00:16:51.256 "enable_recv_pipe": true, 00:16:51.256 "enable_quickack": false, 00:16:51.256 "enable_placement_id": 0, 00:16:51.256 "enable_zerocopy_send_server": true, 00:16:51.256 "enable_zerocopy_send_client": false, 00:16:51.256 "zerocopy_threshold": 0, 00:16:51.256 "tls_version": 0, 00:16:51.256 "enable_ktls": false 00:16:51.256 } 00:16:51.256 }, 00:16:51.256 { 00:16:51.256 "method": "sock_impl_set_options", 00:16:51.257 "params": { 00:16:51.257 "impl_name": "posix", 00:16:51.257 "recv_buf_size": 2097152, 00:16:51.257 "send_buf_size": 2097152, 00:16:51.257 "enable_recv_pipe": true, 00:16:51.257 "enable_quickack": false, 00:16:51.257 "enable_placement_id": 0, 00:16:51.257 "enable_zerocopy_send_server": true, 00:16:51.257 "enable_zerocopy_send_client": false, 00:16:51.257 "zerocopy_threshold": 0, 00:16:51.257 "tls_version": 0, 00:16:51.257 "enable_ktls": false 00:16:51.257 } 00:16:51.257 }, 00:16:51.257 { 00:16:51.257 "method": "sock_impl_set_options", 00:16:51.257 "params": { 00:16:51.257 "impl_name": "uring", 00:16:51.257 "recv_buf_size": 2097152, 00:16:51.257 "send_buf_size": 2097152, 00:16:51.257 "enable_recv_pipe": true, 00:16:51.257 "enable_quickack": false, 00:16:51.257 "enable_placement_id": 0, 00:16:51.257 "enable_zerocopy_send_server": false, 00:16:51.257 "enable_zerocopy_send_client": false, 00:16:51.257 "zerocopy_threshold": 0, 00:16:51.257 "tls_version": 0, 00:16:51.257 "enable_ktls": false 00:16:51.257 } 00:16:51.257 } 00:16:51.257 ] 00:16:51.257 }, 00:16:51.257 { 00:16:51.257 "subsystem": "vmd", 00:16:51.257 "config": [] 00:16:51.257 }, 00:16:51.257 { 00:16:51.257 "subsystem": "accel", 00:16:51.257 "config": [ 00:16:51.257 { 00:16:51.257 "method": "accel_set_options", 00:16:51.257 "params": { 00:16:51.257 "small_cache_size": 128, 00:16:51.257 "large_cache_size": 16, 00:16:51.257 "task_count": 2048, 00:16:51.257 "sequence_count": 2048, 
00:16:51.257 "buf_count": 2048 00:16:51.257 } 00:16:51.257 } 00:16:51.257 ] 00:16:51.257 }, 00:16:51.257 { 00:16:51.257 "subsystem": "bdev", 00:16:51.257 "config": [ 00:16:51.257 { 00:16:51.257 "method": "bdev_set_options", 00:16:51.257 "params": { 00:16:51.257 "bdev_io_pool_size": 65535, 00:16:51.257 "bdev_io_cache_size": 256, 00:16:51.257 "bdev_auto_examine": true, 00:16:51.257 "iobuf_small_cache_size": 128, 00:16:51.257 "iobuf_large_cache_size": 16 00:16:51.257 } 00:16:51.257 }, 00:16:51.257 { 00:16:51.257 "method": "bdev_raid_set_options", 00:16:51.257 "params": { 00:16:51.257 "process_window_size_kb": 1024, 00:16:51.257 "process_max_bandwidth_mb_sec": 0 00:16:51.257 } 00:16:51.257 }, 00:16:51.257 { 00:16:51.257 "method": "bdev_iscsi_set_options", 00:16:51.257 "params": { 00:16:51.257 "timeout_sec": 30 00:16:51.257 } 00:16:51.257 }, 00:16:51.257 { 00:16:51.257 "method": "bdev_nvme_set_options", 00:16:51.257 "params": { 00:16:51.257 "action_on_timeout": "none", 00:16:51.257 "timeout_us": 0, 00:16:51.257 "timeout_admin_us": 0, 00:16:51.257 "keep_alive_timeout_ms": 10000, 00:16:51.257 "arbitration_burst": 0, 00:16:51.257 "low_priority_weight": 0, 00:16:51.257 "medium_priority_weight": 0, 00:16:51.257 "high_priority_weight": 0, 00:16:51.257 "nvme_adminq_poll_period_us": 10000, 00:16:51.257 "nvme_ioq_poll_period_us": 0, 00:16:51.257 "io_queue_requests": 512, 00:16:51.257 "delay_cmd_submit": true, 00:16:51.257 "transport_retry_count": 4, 00:16:51.257 "bdev_retry_count": 3, 00:16:51.257 "transport_ack_timeout": 0, 00:16:51.257 "ctrlr_loss_timeout_sec": 0, 00:16:51.257 "reconnect_delay_sec": 0, 00:16:51.257 "fast_io_fail_timeout_sec": 0, 00:16:51.257 "disable_auto_failback": false, 00:16:51.257 "generate_uuids": false, 00:16:51.257 "transport_tos": 0, 00:16:51.257 "nvme_error_stat": false, 00:16:51.257 "rdma_srq_size": 0, 00:16:51.257 "io_path_stat": false, 00:16:51.257 "allow_accel_sequence": false, 00:16:51.257 "rdma_max_cq_size": 0, 00:16:51.257 "rdma_cm_event_timeout_ms": 0, 00:16:51.257 "dhchap_digests": [ 00:16:51.257 "sha256", 00:16:51.257 "sha384", 00:16:51.257 "sha512" 00:16:51.257 ], 00:16:51.257 "dhchap_dhgroups": [ 00:16:51.257 "null", 00:16:51.257 "ffdhe2048", 00:16:51.257 "ffdhe3072", 00:16:51.257 "ffdhe4096", 00:16:51.257 "ffdhe6144", 00:16:51.257 "ffdhe8192" 00:16:51.257 ] 00:16:51.257 } 00:16:51.257 }, 00:16:51.257 { 00:16:51.257 "method": "bdev_nvme_attach_controller", 00:16:51.257 "params": { 00:16:51.257 "name": "nvme0", 00:16:51.257 "trtype": "TCP", 00:16:51.257 "adrfam": "IPv4", 00:16:51.257 "traddr": "10.0.0.3", 00:16:51.257 "trsvcid": "4420", 00:16:51.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.257 "prchk_reftag": false, 00:16:51.257 "prchk_guard": false, 00:16:51.257 "ctrlr_loss_timeout_sec": 0, 00:16:51.257 "reconnect_delay_sec": 0, 00:16:51.257 "fast_io_fail_timeout_sec": 0, 00:16:51.257 "psk": "key0", 00:16:51.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:51.257 "hdgst": false, 00:16:51.257 "ddgst": false, 00:16:51.257 "multipath": "multipath" 00:16:51.257 } 00:16:51.257 }, 00:16:51.257 { 00:16:51.257 "method": "bdev_nvme_set_hotplug", 00:16:51.257 "params": { 00:16:51.257 "period_us": 100000, 00:16:51.257 "enable": false 00:16:51.257 } 00:16:51.257 }, 00:16:51.257 { 00:16:51.257 "method": "bdev_enable_histogram", 00:16:51.257 "params": { 00:16:51.257 "name": "nvme0n1", 00:16:51.257 "enable": true 00:16:51.257 } 00:16:51.257 }, 00:16:51.257 { 00:16:51.257 "method": "bdev_wait_for_examine" 00:16:51.257 } 00:16:51.257 ] 00:16:51.257 }, 00:16:51.257 { 
00:16:51.257 "subsystem": "nbd", 00:16:51.257 "config": [] 00:16:51.257 } 00:16:51.257 ] 00:16:51.257 }' 00:16:51.257 [2024-12-05 19:36:44.613222] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:16:51.257 [2024-12-05 19:36:44.613622] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72457 ] 00:16:51.516 [2024-12-05 19:36:44.776703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.516 [2024-12-05 19:36:44.834085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.781 [2024-12-05 19:36:44.970748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:51.781 [2024-12-05 19:36:45.020717] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:52.347 19:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.347 19:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:52.347 19:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:52.347 19:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:16:52.605 19:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.605 19:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:52.605 Running I/O for 1 seconds... 
00:16:53.980 4352.00 IOPS, 17.00 MiB/s 00:16:53.980 Latency(us) 00:16:53.980 [2024-12-05T19:36:47.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.980 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:53.980 Verification LBA range: start 0x0 length 0x2000 00:16:53.980 nvme0n1 : 1.02 4410.38 17.23 0.00 0.00 28727.67 10664.49 21924.77 00:16:53.980 [2024-12-05T19:36:47.417Z] =================================================================================================================== 00:16:53.980 [2024-12-05T19:36:47.417Z] Total : 4410.38 17.23 0.00 0.00 28727.67 10664.49 21924.77 00:16:53.980 { 00:16:53.980 "results": [ 00:16:53.980 { 00:16:53.980 "job": "nvme0n1", 00:16:53.980 "core_mask": "0x2", 00:16:53.980 "workload": "verify", 00:16:53.980 "status": "finished", 00:16:53.980 "verify_range": { 00:16:53.980 "start": 0, 00:16:53.980 "length": 8192 00:16:53.980 }, 00:16:53.980 "queue_depth": 128, 00:16:53.980 "io_size": 4096, 00:16:53.980 "runtime": 1.015786, 00:16:53.980 "iops": 4410.377776421412, 00:16:53.980 "mibps": 17.22803818914614, 00:16:53.980 "io_failed": 0, 00:16:53.980 "io_timeout": 0, 00:16:53.980 "avg_latency_us": 28727.668363636363, 00:16:53.980 "min_latency_us": 10664.494545454545, 00:16:53.980 "max_latency_us": 21924.77090909091 00:16:53.980 } 00:16:53.980 ], 00:16:53.980 "core_count": 1 00:16:53.980 } 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:53.980 nvmf_trace.0 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72457 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72457 ']' 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72457 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72457 00:16:53.980 killing process 
with pid 72457 00:16:53.980 Received shutdown signal, test time was about 1.000000 seconds 00:16:53.980 00:16:53.980 Latency(us) 00:16:53.980 [2024-12-05T19:36:47.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.980 [2024-12-05T19:36:47.417Z] =================================================================================================================== 00:16:53.980 [2024-12-05T19:36:47.417Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72457' 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72457 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72457 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:53.980 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:53.980 rmmod nvme_tcp 00:16:53.980 rmmod nvme_fabrics 00:16:54.239 rmmod nvme_keyring 00:16:54.239 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:54.239 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:16:54.239 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:16:54.239 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72425 ']' 00:16:54.239 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72425 00:16:54.239 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72425 ']' 00:16:54.239 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72425 00:16:54.239 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:54.239 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.239 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72425 00:16:54.239 killing process with pid 72425 00:16:54.239 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.239 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.239 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72425' 00:16:54.239 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72425 00:16:54.239 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72425 
00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:54.498 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.757 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:16:54.757 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.tlacv7ULS5 /tmp/tmp.tMu1cCdqw3 /tmp/tmp.BrItUbweve 00:16:54.757 00:16:54.757 real 1m27.047s 00:16:54.757 user 2m21.954s 00:16:54.757 sys 0m27.181s 00:16:54.757 19:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.757 ************************************ 00:16:54.757 END TEST nvmf_tls 00:16:54.757 19:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:54.757 ************************************ 00:16:54.757 19:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:54.757 19:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:54.757 19:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.757 19:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:54.757 ************************************ 00:16:54.757 START TEST nvmf_fips 00:16:54.757 ************************************ 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:54.757 * Looking for test storage... 00:16:54.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:54.757 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:54.758 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:54.758 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:54.758 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:54.758 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:54.758 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:54.758 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:54.758 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:55.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.018 --rc genhtml_branch_coverage=1 00:16:55.018 --rc genhtml_function_coverage=1 00:16:55.018 --rc genhtml_legend=1 00:16:55.018 --rc geninfo_all_blocks=1 00:16:55.018 --rc geninfo_unexecuted_blocks=1 00:16:55.018 00:16:55.018 ' 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:55.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.018 --rc genhtml_branch_coverage=1 00:16:55.018 --rc genhtml_function_coverage=1 00:16:55.018 --rc genhtml_legend=1 00:16:55.018 --rc geninfo_all_blocks=1 00:16:55.018 --rc geninfo_unexecuted_blocks=1 00:16:55.018 00:16:55.018 ' 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:55.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.018 --rc genhtml_branch_coverage=1 00:16:55.018 --rc genhtml_function_coverage=1 00:16:55.018 --rc genhtml_legend=1 00:16:55.018 --rc geninfo_all_blocks=1 00:16:55.018 --rc geninfo_unexecuted_blocks=1 00:16:55.018 00:16:55.018 ' 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:55.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.018 --rc genhtml_branch_coverage=1 00:16:55.018 --rc genhtml_function_coverage=1 00:16:55.018 --rc genhtml_legend=1 00:16:55.018 --rc geninfo_all_blocks=1 00:16:55.018 --rc geninfo_unexecuted_blocks=1 00:16:55.018 00:16:55.018 ' 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.018 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:55.019 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:16:55.019 Error setting digest 00:16:55.019 40E2F4A29D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:16:55.019 40E2F4A29D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:55.019 
19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:55.019 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:55.020 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:55.020 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.020 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:55.020 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:55.020 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:55.020 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:55.020 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:55.020 Cannot find device "nvmf_init_br" 00:16:55.020 19:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:55.020 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:55.020 Cannot find device "nvmf_init_br2" 00:16:55.020 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:55.020 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:55.020 Cannot find device "nvmf_tgt_br" 00:16:55.020 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:16:55.020 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:55.020 Cannot find device "nvmf_tgt_br2" 00:16:55.020 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:16:55.020 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:55.020 Cannot find device "nvmf_init_br" 00:16:55.020 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:16:55.020 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:55.279 Cannot find device "nvmf_init_br2" 00:16:55.279 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:16:55.279 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:55.279 Cannot find device "nvmf_tgt_br" 00:16:55.279 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:16:55.279 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:55.279 Cannot find device "nvmf_tgt_br2" 00:16:55.279 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:16:55.279 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:55.279 Cannot find device "nvmf_br" 00:16:55.279 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:16:55.279 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:55.279 Cannot find device "nvmf_init_if" 00:16:55.279 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:16:55.279 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:55.279 Cannot find device "nvmf_init_if2" 00:16:55.279 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:16:55.279 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:55.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:55.279 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:16:55.279 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:55.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:55.279 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:16:55.279 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:55.280 19:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:55.280 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:55.539 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:55.539 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:55.539 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:55.539 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:55.539 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:55.539 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:55.539 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:55.539 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:55.539 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:55.539 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:55.539 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:55.539 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:55.539 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:55.539 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:55.539 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:16:55.539 00:16:55.539 --- 10.0.0.3 ping statistics --- 00:16:55.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.539 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:55.539 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:55.539 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:55.539 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:16:55.539 00:16:55.539 --- 10.0.0.4 ping statistics --- 00:16:55.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.539 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:55.539 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:55.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:55.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:55.539 00:16:55.540 --- 10.0.0.1 ping statistics --- 00:16:55.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.540 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:55.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:55.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:55.540 00:16:55.540 --- 10.0.0.2 ping statistics --- 00:16:55.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.540 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72772 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72772 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72772 ']' 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.540 19:36:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:55.540 [2024-12-05 19:36:48.929146] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:16:55.540 [2024-12-05 19:36:48.929917] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.799 [2024-12-05 19:36:49.086728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.799 [2024-12-05 19:36:49.143894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.799 [2024-12-05 19:36:49.143954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.799 [2024-12-05 19:36:49.143979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.799 [2024-12-05 19:36:49.143997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.799 [2024-12-05 19:36:49.144006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:55.799 [2024-12-05 19:36:49.144485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.799 [2024-12-05 19:36:49.204335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:56.736 19:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.736 19:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:56.736 19:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:56.736 19:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:56.736 19:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:56.736 19:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.736 19:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:16:56.736 19:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:56.736 19:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:16:56.736 19:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.rBK 00:16:56.736 19:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:56.736 19:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.rBK 00:16:56.736 19:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.rBK 00:16:56.736 19:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.rBK 00:16:56.736 19:36:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:56.995 [2024-12-05 19:36:50.233202] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.995 [2024-12-05 19:36:50.249116] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:56.995 [2024-12-05 19:36:50.249322] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:56.995 malloc0 00:16:56.995 19:36:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:56.995 19:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72818 00:16:56.995 19:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:56.995 19:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72818 /var/tmp/bdevperf.sock 00:16:56.995 19:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72818 ']' 00:16:56.995 19:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.995 19:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:56.995 19:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.995 19:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.995 19:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:56.995 [2024-12-05 19:36:50.378608] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:16:56.995 [2024-12-05 19:36:50.378710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72818 ] 00:16:57.254 [2024-12-05 19:36:50.528494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.254 [2024-12-05 19:36:50.597374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.254 [2024-12-05 19:36:50.659251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:57.513 19:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.513 19:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:57.513 19:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.rBK 00:16:57.771 19:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:58.030 [2024-12-05 19:36:51.234232] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:58.030 TLSTESTn1 00:16:58.030 19:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:58.030 Running I/O for 10 seconds... 
00:17:00.343 4096.00 IOPS, 16.00 MiB/s [2024-12-05T19:36:54.716Z] 4160.00 IOPS, 16.25 MiB/s [2024-12-05T19:36:55.654Z] 4181.33 IOPS, 16.33 MiB/s [2024-12-05T19:36:56.589Z] 4207.75 IOPS, 16.44 MiB/s [2024-12-05T19:36:57.526Z] 4246.00 IOPS, 16.59 MiB/s [2024-12-05T19:36:58.463Z] 4274.50 IOPS, 16.70 MiB/s [2024-12-05T19:36:59.866Z] 4296.00 IOPS, 16.78 MiB/s [2024-12-05T19:37:00.434Z] 4297.12 IOPS, 16.79 MiB/s [2024-12-05T19:37:01.816Z] 4295.56 IOPS, 16.78 MiB/s [2024-12-05T19:37:01.816Z] 4300.40 IOPS, 16.80 MiB/s 00:17:08.379 Latency(us) 00:17:08.379 [2024-12-05T19:37:01.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.379 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:08.379 Verification LBA range: start 0x0 length 0x2000 00:17:08.379 TLSTESTn1 : 10.01 4307.21 16.83 0.00 0.00 29666.72 4498.15 24069.59 00:17:08.379 [2024-12-05T19:37:01.816Z] =================================================================================================================== 00:17:08.379 [2024-12-05T19:37:01.816Z] Total : 4307.21 16.83 0.00 0.00 29666.72 4498.15 24069.59 00:17:08.379 { 00:17:08.379 "results": [ 00:17:08.379 { 00:17:08.379 "job": "TLSTESTn1", 00:17:08.379 "core_mask": "0x4", 00:17:08.379 "workload": "verify", 00:17:08.379 "status": "finished", 00:17:08.379 "verify_range": { 00:17:08.379 "start": 0, 00:17:08.379 "length": 8192 00:17:08.379 }, 00:17:08.379 "queue_depth": 128, 00:17:08.379 "io_size": 4096, 00:17:08.379 "runtime": 10.013896, 00:17:08.379 "iops": 4307.214694460577, 00:17:08.379 "mibps": 16.82505740023663, 00:17:08.379 "io_failed": 0, 00:17:08.379 "io_timeout": 0, 00:17:08.379 "avg_latency_us": 29666.724061612134, 00:17:08.379 "min_latency_us": 4498.152727272727, 00:17:08.379 "max_latency_us": 24069.585454545453 00:17:08.379 } 00:17:08.379 ], 00:17:08.379 "core_count": 1 00:17:08.379 } 00:17:08.379 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:08.379 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:08.379 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:17:08.379 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:08.380 nvmf_trace.0 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72818 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72818 ']' 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
72818 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72818 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:08.380 killing process with pid 72818 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72818' 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72818 00:17:08.380 Received shutdown signal, test time was about 10.000000 seconds 00:17:08.380 00:17:08.380 Latency(us) 00:17:08.380 [2024-12-05T19:37:01.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.380 [2024-12-05T19:37:01.817Z] =================================================================================================================== 00:17:08.380 [2024-12-05T19:37:01.817Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72818 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:08.380 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:08.380 rmmod nvme_tcp 00:17:08.639 rmmod nvme_fabrics 00:17:08.639 rmmod nvme_keyring 00:17:08.639 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:08.639 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:17:08.639 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:17:08.639 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72772 ']' 00:17:08.639 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72772 00:17:08.639 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72772 ']' 00:17:08.639 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72772 00:17:08.639 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:08.639 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.639 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72772 00:17:08.639 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:08.639 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:17:08.639 killing process with pid 72772 00:17:08.639 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72772' 00:17:08.639 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72772 00:17:08.639 19:37:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72772 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:08.899 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:08.900 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:08.900 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.900 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.900 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.160 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:17:09.160 19:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.rBK 00:17:09.160 00:17:09.160 real 0m14.344s 00:17:09.160 user 0m19.509s 00:17:09.160 sys 0m5.678s 00:17:09.160 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.160 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:09.160 ************************************ 00:17:09.160 END TEST nvmf_fips 00:17:09.160 ************************************ 00:17:09.160 19:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:09.160 19:37:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:09.160 19:37:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.160 19:37:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:09.160 ************************************ 00:17:09.160 START TEST nvmf_control_msg_list 00:17:09.160 ************************************ 00:17:09.160 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:09.160 * Looking for test storage... 00:17:09.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:09.160 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:09.160 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:17:09.160 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:09.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.161 --rc genhtml_branch_coverage=1 00:17:09.161 --rc genhtml_function_coverage=1 00:17:09.161 --rc genhtml_legend=1 00:17:09.161 --rc geninfo_all_blocks=1 00:17:09.161 --rc geninfo_unexecuted_blocks=1 00:17:09.161 00:17:09.161 ' 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:09.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.161 --rc genhtml_branch_coverage=1 00:17:09.161 --rc genhtml_function_coverage=1 00:17:09.161 --rc genhtml_legend=1 00:17:09.161 --rc geninfo_all_blocks=1 00:17:09.161 --rc geninfo_unexecuted_blocks=1 00:17:09.161 00:17:09.161 ' 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:09.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.161 --rc genhtml_branch_coverage=1 00:17:09.161 --rc genhtml_function_coverage=1 00:17:09.161 --rc genhtml_legend=1 00:17:09.161 --rc geninfo_all_blocks=1 00:17:09.161 --rc geninfo_unexecuted_blocks=1 00:17:09.161 00:17:09.161 ' 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:09.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.161 --rc genhtml_branch_coverage=1 00:17:09.161 --rc genhtml_function_coverage=1 00:17:09.161 --rc genhtml_legend=1 00:17:09.161 --rc geninfo_all_blocks=1 00:17:09.161 --rc 
geninfo_unexecuted_blocks=1 00:17:09.161 00:17:09.161 ' 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.161 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:09.162 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:09.162 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:09.421 Cannot find device "nvmf_init_br" 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:09.421 Cannot find device "nvmf_init_br2" 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:09.421 Cannot find device "nvmf_tgt_br" 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:09.421 Cannot find device "nvmf_tgt_br2" 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:09.421 Cannot find device "nvmf_init_br" 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:09.421 Cannot find device "nvmf_init_br2" 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:09.421 Cannot find device "nvmf_tgt_br" 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:09.421 Cannot find device "nvmf_tgt_br2" 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:09.421 Cannot find device "nvmf_br" 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:17:09.421 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:09.421 Cannot find 
device "nvmf_init_if" 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:09.422 Cannot find device "nvmf_init_if2" 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:09.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:09.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:09.422 19:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:09.422 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:09.681 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:09.681 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:17:09.681 00:17:09.681 --- 10.0.0.3 ping statistics --- 00:17:09.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.681 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:09.681 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:09.681 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:17:09.681 00:17:09.681 --- 10.0.0.4 ping statistics --- 00:17:09.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.681 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:09.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:09.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:17:09.681 00:17:09.681 --- 10.0.0.1 ping statistics --- 00:17:09.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.681 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:09.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:17:09.681 00:17:09.681 --- 10.0.0.2 ping statistics --- 00:17:09.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.681 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73192 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73192 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73192 ']' 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
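Stripped of the xtrace prefixes, the network bring-up traced above reduces to a small veth-plus-namespace topology: the initiator addresses (10.0.0.1/10.0.0.2) stay in the root namespace, the target addresses (10.0.0.3/10.0.0.4) live on interfaces moved into nvmf_tgt_ns_spdk, and everything is joined through the nvmf_br bridge with SPDK-tagged iptables ACCEPT rules for port 4420. A condensed sketch of what nvmf_veth_init in test/nvmf/common.sh is doing (one interface pair shown, not the verbatim script):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target side
  ip link add nvmf_br type bridge
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up; ip link set nvmf_tgt_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3   # initiator-side sanity check against the target address

The nvmf_tgt application is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF), which is why the pings to 10.0.0.1/10.0.0.2 in the trace are also executed from within that namespace.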
00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.681 19:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:09.681 [2024-12-05 19:37:03.027346] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:17:09.681 [2024-12-05 19:37:03.027985] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.939 [2024-12-05 19:37:03.183136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.939 [2024-12-05 19:37:03.240207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.939 [2024-12-05 19:37:03.240286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.939 [2024-12-05 19:37:03.240312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.939 [2024-12-05 19:37:03.240323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.940 [2024-12-05 19:37:03.240331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.940 [2024-12-05 19:37:03.240800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.940 [2024-12-05 19:37:03.300794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:10.879 [2024-12-05 19:37:04.085454] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:10.879 Malloc0 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:10.879 [2024-12-05 19:37:04.133326] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73224 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73225 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73226 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:10.879 19:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73224 00:17:11.138 [2024-12-05 19:37:04.323676] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:11.138 [2024-12-05 19:37:04.343759] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:11.138 [2024-12-05 19:37:04.353839] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:12.089 Initializing NVMe Controllers 00:17:12.089 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:12.089 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:17:12.089 Initialization complete. Launching workers. 00:17:12.089 ======================================================== 00:17:12.089 Latency(us) 00:17:12.089 Device Information : IOPS MiB/s Average min max 00:17:12.089 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3498.00 13.66 285.57 126.34 655.36 00:17:12.089 ======================================================== 00:17:12.089 Total : 3498.00 13.66 285.57 126.34 655.36 00:17:12.089 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73225 00:17:12.089 Initializing NVMe Controllers 00:17:12.089 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:12.089 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:17:12.089 Initialization complete. Launching workers. 00:17:12.089 ======================================================== 00:17:12.089 Latency(us) 00:17:12.089 Device Information : IOPS MiB/s Average min max 00:17:12.089 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3469.00 13.55 287.86 174.65 656.95 00:17:12.089 ======================================================== 00:17:12.089 Total : 3469.00 13.55 287.86 174.65 656.95 00:17:12.089 00:17:12.089 Initializing NVMe Controllers 00:17:12.089 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:12.089 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:17:12.089 Initialization complete. Launching workers. 
00:17:12.089 ======================================================== 00:17:12.089 Latency(us) 00:17:12.089 Device Information : IOPS MiB/s Average min max 00:17:12.089 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3511.95 13.72 284.38 110.29 449.21 00:17:12.089 ======================================================== 00:17:12.089 Total : 3511.95 13.72 284.38 110.29 449.21 00:17:12.089 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73226 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:12.089 rmmod nvme_tcp 00:17:12.089 rmmod nvme_fabrics 00:17:12.089 rmmod nvme_keyring 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73192 ']' 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73192 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73192 ']' 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73192 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.089 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73192 00:17:12.090 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:12.090 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:12.090 killing process with pid 73192 00:17:12.090 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73192' 00:17:12.090 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73192 00:17:12.090 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 73192 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:12.374 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:12.632 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:12.632 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:12.632 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:12.632 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:12.632 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:12.632 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:12.632 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.632 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.632 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.632 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:17:12.632 00:17:12.632 real 0m3.572s 00:17:12.632 user 0m5.692s 00:17:12.632 sys 0m1.403s 00:17:12.632 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 
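Leaving aside the teardown noise, the control_msg_list pass that just completed amounts to configuring a TCP transport that deliberately advertises a single control message buffer and then pointing three single-queue-depth readers at it concurrently. A condensed sketch of the sequence from target/control_msg_list.sh (rpc_cmd is the harness wrapper that drives the target's JSON-RPC socket; binary paths shortened):

  rpc_cmd nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  rpc_cmd bdev_malloc_create -b Malloc0 32 512
  rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # three concurrent initiators, presumably to make them contend for the single control message slot
  spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
  spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
  spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
  wait

All three perf instances complete with comparable IOPS and latency figures in the tables above, which is the pass criterion implied by the test.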
00:17:12.632 19:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:12.632 ************************************ 00:17:12.632 END TEST nvmf_control_msg_list 00:17:12.632 ************************************ 00:17:12.632 19:37:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:12.632 19:37:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:12.632 19:37:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.632 19:37:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:12.632 ************************************ 00:17:12.632 START TEST nvmf_wait_for_buf 00:17:12.632 ************************************ 00:17:12.632 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:12.891 * Looking for test storage... 00:17:12.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:12.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.891 --rc genhtml_branch_coverage=1 00:17:12.891 --rc genhtml_function_coverage=1 00:17:12.891 --rc genhtml_legend=1 00:17:12.891 --rc geninfo_all_blocks=1 00:17:12.891 --rc geninfo_unexecuted_blocks=1 00:17:12.891 00:17:12.891 ' 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:12.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.891 --rc genhtml_branch_coverage=1 00:17:12.891 --rc genhtml_function_coverage=1 00:17:12.891 --rc genhtml_legend=1 00:17:12.891 --rc geninfo_all_blocks=1 00:17:12.891 --rc geninfo_unexecuted_blocks=1 00:17:12.891 00:17:12.891 ' 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:12.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.891 --rc genhtml_branch_coverage=1 00:17:12.891 --rc genhtml_function_coverage=1 00:17:12.891 --rc genhtml_legend=1 00:17:12.891 --rc geninfo_all_blocks=1 00:17:12.891 --rc geninfo_unexecuted_blocks=1 00:17:12.891 00:17:12.891 ' 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:12.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.891 --rc genhtml_branch_coverage=1 00:17:12.891 --rc genhtml_function_coverage=1 00:17:12.891 --rc genhtml_legend=1 00:17:12.891 --rc geninfo_all_blocks=1 00:17:12.891 --rc geninfo_unexecuted_blocks=1 00:17:12.891 00:17:12.891 ' 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:12.891 19:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.891 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:12.892 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
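The "[: : integer expression expected" complaint above is emitted at test/nvmf/common.sh line 33, where build_nvmf_app_args runs '[' '' -eq 1 ']' because the flag being tested expands to an empty string in this configuration. A minimal sketch of the failing pattern and a guarded variant; the variable name flag is a placeholder, not the harness's actual variable:

  flag=""                                      # mirrors the empty expansion seen at common.sh:33
  [ "$flag" -eq 1 ] && echo "enabled"          # -> [: : integer expression expected
  [ "${flag:-0}" -eq 1 ] && echo "enabled"     # guarded form: empty/unset reads as 0, no error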
00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:12.892 Cannot find device "nvmf_init_br" 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:12.892 Cannot find device "nvmf_init_br2" 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:12.892 Cannot find device "nvmf_tgt_br" 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:12.892 Cannot find device "nvmf_tgt_br2" 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:12.892 Cannot find device "nvmf_init_br" 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:12.892 Cannot find device "nvmf_init_br2" 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:12.892 Cannot find device "nvmf_tgt_br" 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:12.892 Cannot find device "nvmf_tgt_br2" 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:17:12.892 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:13.150 Cannot find device "nvmf_br" 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:13.150 Cannot find device "nvmf_init_if" 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:13.150 Cannot find device "nvmf_init_if2" 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:13.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:13.150 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:13.150 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:13.151 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:13.151 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:13.151 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:13.151 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:13.151 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:13.151 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:13.151 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:13.151 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:13.409 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:13.409 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:17:13.409 00:17:13.409 --- 10.0.0.3 ping statistics --- 00:17:13.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.409 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:13.409 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:13.409 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:17:13.409 00:17:13.409 --- 10.0.0.4 ping statistics --- 00:17:13.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.409 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:13.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:13.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:13.409 00:17:13.409 --- 10.0.0.1 ping statistics --- 00:17:13.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.409 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:13.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:13.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:17:13.409 00:17:13.409 --- 10.0.0.2 ping statistics --- 00:17:13.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.409 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73465 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73465 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73465 ']' 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.409 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:13.409 [2024-12-05 19:37:06.709543] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
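nvmfappstart above amounts to launching the target inside the freshly built namespace and polling its RPC socket before any rpc_cmd is issued. A condensed sketch of that launch, assuming root privileges and the SPDK tree at /home/vagrant/spdk_repo/spdk; the command line, PID variable, and socket path come from the trace, while the polling loop is a rough reconstruction of waitforlisten using the stock rpc.py client:

  # --wait-for-rpc defers subsystem initialization until framework_start_init arrives over RPC.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the app answers, giving up if the process dies.
  for _ in $(seq 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
          >/dev/null 2>&1 && break
      kill -0 "$nvmfpid" 2>/dev/null || exit 1
      sleep 0.5
  done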
00:17:13.409 [2024-12-05 19:37:06.709657] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.667 [2024-12-05 19:37:06.864772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.668 [2024-12-05 19:37:06.931245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.668 [2024-12-05 19:37:06.931332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.668 [2024-12-05 19:37:06.931359] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.668 [2024-12-05 19:37:06.931374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.668 [2024-12-05 19:37:06.931383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.668 [2024-12-05 19:37:06.931935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.668 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.668 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:17:13.668 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:13.668 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:13.668 19:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:13.668 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.668 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:13.668 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:13.668 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:17:13.668 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.668 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:13.668 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.668 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:17:13.668 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.668 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:13.668 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.668 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:17:13.668 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.668 19:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:13.668 [2024-12-05 19:37:07.078321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:13.926 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.926 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:13.926 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.926 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:13.926 Malloc0 00:17:13.926 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.926 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:17:13.926 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.926 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:13.926 [2024-12-05 19:37:07.144054] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.927 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.927 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:13.927 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.927 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:13.927 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.927 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:13.927 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.927 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:13.927 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.927 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:13.927 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.927 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:13.927 [2024-12-05 19:37:07.168112] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:13.927 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.927 19:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:14.185 [2024-12-05 19:37:07.367254] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:15.564 Initializing NVMe Controllers 00:17:15.564 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:15.564 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:17:15.564 Initialization complete. Launching workers. 00:17:15.564 ======================================================== 00:17:15.564 Latency(us) 00:17:15.564 Device Information : IOPS MiB/s Average min max 00:17:15.564 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 498.06 62.26 8031.00 7136.09 11013.52 00:17:15.564 ======================================================== 00:17:15.564 Total : 498.06 62.26 8031.00 7136.09 11013.52 00:17:15.564 00:17:15.564 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:17:15.564 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:17:15.564 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:15.565 rmmod nvme_tcp 00:17:15.565 rmmod nvme_fabrics 00:17:15.565 rmmod nvme_keyring 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73465 ']' 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73465 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73465 ']' 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 73465 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73465 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:15.565 killing process with pid 73465 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73465' 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73465 00:17:15.565 19:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73465 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:17:15.824 00:17:15.824 real 0m3.234s 00:17:15.824 user 0m2.576s 00:17:15.824 sys 0m0.831s 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.824 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.824 ************************************ 00:17:15.824 END TEST nvmf_wait_for_buf 00:17:15.824 ************************************ 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:16.084 ************************************ 00:17:16.084 START TEST nvmf_nsid 00:17:16.084 ************************************ 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:16.084 * Looking for test storage... 
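Stripped of the xtrace noise, the nvmf_wait_for_buf run that just ended is a short RPC sequence: undersize the iobuf small pool, bring up a TCP subsystem backed by a malloc bdev, drive it with a brief randread workload, then require that the transport had to retry small-buffer allocations. A condensed sketch in which rpc is our shorthand for the stock rpc.py client against /var/tmp/spdk.sock; every command and parameter below is taken from the trace above:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

  rpc accel_set_options --small-cache-size 0 --large-cache-size 0
  rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192     # deliberately small pool
  rpc framework_start_init                                              # finish --wait-for-rpc startup
  rpc bdev_malloc_create -b Malloc0 32 512
  rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

  retry_count=$(rpc iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [[ $retry_count -eq 0 ]] && exit 1     # the test passes only if retries occurred (4750 in this run)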
00:17:16.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:16.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.084 --rc genhtml_branch_coverage=1 00:17:16.084 --rc genhtml_function_coverage=1 00:17:16.084 --rc genhtml_legend=1 00:17:16.084 --rc geninfo_all_blocks=1 00:17:16.084 --rc geninfo_unexecuted_blocks=1 00:17:16.084 00:17:16.084 ' 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:16.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.084 --rc genhtml_branch_coverage=1 00:17:16.084 --rc genhtml_function_coverage=1 00:17:16.084 --rc genhtml_legend=1 00:17:16.084 --rc geninfo_all_blocks=1 00:17:16.084 --rc geninfo_unexecuted_blocks=1 00:17:16.084 00:17:16.084 ' 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:16.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.084 --rc genhtml_branch_coverage=1 00:17:16.084 --rc genhtml_function_coverage=1 00:17:16.084 --rc genhtml_legend=1 00:17:16.084 --rc geninfo_all_blocks=1 00:17:16.084 --rc geninfo_unexecuted_blocks=1 00:17:16.084 00:17:16.084 ' 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:16.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.084 --rc genhtml_branch_coverage=1 00:17:16.084 --rc genhtml_function_coverage=1 00:17:16.084 --rc genhtml_legend=1 00:17:16.084 --rc geninfo_all_blocks=1 00:17:16.084 --rc geninfo_unexecuted_blocks=1 00:17:16.084 00:17:16.084 ' 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
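The lt/cmp_versions trace that opens each test preamble (above, comparing against 2 before choosing the lcov_branch_coverage-style LCOV options) boils down to a field-by-field numeric comparison. A simplified sketch, reconstructed from the trace rather than copied from scripts/common.sh, so separators and edge cases are abbreviated:

  # Usage: lt 1.15 2  -> returns 0 (true) because 1.15 sorts before 2.
  lt() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1    # equal versions are not "less than"
  }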
00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.084 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:16.085 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:16.085 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:16.345 Cannot find device "nvmf_init_br" 00:17:16.345 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:16.346 Cannot find device "nvmf_init_br2" 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:16.346 Cannot find device "nvmf_tgt_br" 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:16.346 Cannot find device "nvmf_tgt_br2" 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:16.346 Cannot find device "nvmf_init_br" 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:16.346 Cannot find device "nvmf_init_br2" 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:16.346 Cannot find device "nvmf_tgt_br" 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:16.346 Cannot find device "nvmf_tgt_br2" 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:16.346 Cannot find device "nvmf_br" 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:16.346 Cannot find device "nvmf_init_if" 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:16.346 Cannot find device "nvmf_init_if2" 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:16.346 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:17:16.346 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:16.346 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
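At this point nvmf_veth_init has rebuilt the same virtual topology used by the previous test: a network namespace holding the target-side ends of two veth pairs, the initiator-side ends left in the root namespace, and all four bridge-facing peers enslaved to nvmf_br. Condensed to the commands actually issued above, showing one pair per side (the *_if2/*_br2 interfaces repeat the pattern with 10.0.0.2 and 10.0.0.4, and the iptables comment tagging used for later cleanup is omitted):

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: the *_if ends carry the addresses, the *_br ends get plugged into the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays in the root ns
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # a single bridge ties the *_br ends together so 10.0.0.1 can reach 10.0.0.3
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # open the NVMe/TCP port on the initiator-facing interface
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT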
00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:16.606 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:16.606 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:17:16.606 00:17:16.606 --- 10.0.0.3 ping statistics --- 00:17:16.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.606 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:16.606 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:16.606 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:17:16.606 00:17:16.606 --- 10.0.0.4 ping statistics --- 00:17:16.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.606 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:16.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:16.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:16.606 00:17:16.606 --- 10.0.0.1 ping statistics --- 00:17:16.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.606 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:16.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:16.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:17:16.606 00:17:16.606 --- 10.0.0.2 ping statistics --- 00:17:16.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.606 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.606 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73725 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73725 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73725 ']' 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.607 19:37:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:16.607 [2024-12-05 19:37:09.965625] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:17:16.607 [2024-12-05 19:37:09.965729] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.867 [2024-12-05 19:37:10.113975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.867 [2024-12-05 19:37:10.171046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.867 [2024-12-05 19:37:10.171126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.867 [2024-12-05 19:37:10.171139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.867 [2024-12-05 19:37:10.171147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.867 [2024-12-05 19:37:10.171154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.867 [2024-12-05 19:37:10.171592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.867 [2024-12-05 19:37:10.231334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73745 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=af6e1e07-8b38-487b-8e5d-5506ab209f01 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=7f596c9f-8291-46f9-a0c9-d621f3c6bc88 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=4de19914-c917-40f9-abc5-e6b8a5939850 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:17.127 null0 00:17:17.127 null1 00:17:17.127 null2 00:17:17.127 [2024-12-05 19:37:10.406205] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.127 [2024-12-05 19:37:10.430355] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:17.127 [2024-12-05 19:37:10.447784] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:17:17.127 [2024-12-05 19:37:10.447939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73745 ] 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73745 /var/tmp/tgt2.sock 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73745 ']' 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:17.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
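At this point the nsid test has a second target coming up on /var/tmp/tgt2.sock and three freshly generated namespace UUIDs (ns1uuid, ns2uuid, ns3uuid). Further down in the trace each attached namespace is verified by reading its NGUID back through nvme-cli and comparing it against the UUID with the dashes stripped (the uuid2nguid and nvme_get_nguid helpers). A minimal bash sketch of that check for the first namespace, assuming nvme-cli and jq are installed and the controller appears as /dev/nvme0; the NQN, address, port and jq field mirror the trace, everything else is simplified:

  ns1uuid=$(uuidgen)                                  # e.g. af6e1e07-8b38-487b-8e5d-5506ab209f01
  expected=$(tr -d - <<< "$ns1uuid")                  # uuid2nguid: drop dashes -> 32 hex digits
  nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2
  actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
  [[ "${actual^^}" == "${expected^^}" ]] && echo "NGUID matches namespace 1 UUID"

The real script additionally waits for the block device to show up (the lsblk loop in the trace below) and repeats the same comparison for nvme0n2 and nvme0n3 before disconnecting and tearing both targets down.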
00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.127 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:17.387 [2024-12-05 19:37:10.610161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.387 [2024-12-05 19:37:10.676163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.387 [2024-12-05 19:37:10.751477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:17.646 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.646 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:17.646 19:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:18.230 [2024-12-05 19:37:11.373993] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.230 [2024-12-05 19:37:11.390165] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:17:18.230 nvme0n1 nvme0n2 00:17:18.230 nvme1n1 00:17:18.230 19:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:18.230 19:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:18.230 19:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:18.230 19:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:17:18.230 19:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:18.230 19:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:18.230 19:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:18.230 19:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:17:18.230 19:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:18.230 19:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:18.230 19:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:18.230 19:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:18.230 19:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:18.230 19:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:17:18.230 19:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:17:18.230 19:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:17:19.168 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:19.168 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:19.168 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:19.168 19:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid af6e1e07-8b38-487b-8e5d-5506ab209f01 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=af6e1e078b38487b8e5d5506ab209f01 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AF6E1E078B38487B8E5D5506AB209F01 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ AF6E1E078B38487B8E5D5506AB209F01 == \A\F\6\E\1\E\0\7\8\B\3\8\4\8\7\B\8\E\5\D\5\5\0\6\A\B\2\0\9\F\0\1 ]] 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 7f596c9f-8291-46f9-a0c9-d621f3c6bc88 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:19.427 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7f596c9f829146f9a0c9d621f3c6bc88 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7F596C9F829146F9A0C9D621F3C6BC88 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 7F596C9F829146F9A0C9D621F3C6BC88 == \7\F\5\9\6\C\9\F\8\2\9\1\4\6\F\9\A\0\C\9\D\6\2\1\F\3\C\6\B\C\8\8 ]] 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:19.428 19:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 4de19914-c917-40f9-abc5-e6b8a5939850 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4de19914c91740f9abc5e6b8a5939850 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4DE19914C91740F9ABC5E6B8A5939850 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 4DE19914C91740F9ABC5E6B8A5939850 == \4\D\E\1\9\9\1\4\C\9\1\7\4\0\F\9\A\B\C\5\E\6\B\8\A\5\9\3\9\8\5\0 ]] 00:17:19.428 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:17:19.687 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:17:19.687 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:17:19.687 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73745 00:17:19.687 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73745 ']' 00:17:19.687 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73745 00:17:19.687 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:19.687 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.687 19:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73745 00:17:19.687 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:19.687 killing process with pid 73745 00:17:19.687 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:19.687 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73745' 00:17:19.687 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73745 00:17:19.687 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73745 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:20.254 rmmod nvme_tcp 00:17:20.254 rmmod nvme_fabrics 00:17:20.254 rmmod nvme_keyring 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73725 ']' 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73725 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73725 ']' 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73725 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73725 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:20.254 killing process with pid 73725 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73725' 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73725 00:17:20.254 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73725 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.513 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.772 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:20.772 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.772 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:20.772 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.772 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:17:20.772 00:17:20.772 real 0m4.680s 00:17:20.772 user 0m7.076s 00:17:20.772 sys 0m1.672s 00:17:20.772 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.772 19:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:20.772 ************************************ 00:17:20.772 END TEST nvmf_nsid 00:17:20.772 ************************************ 00:17:20.772 19:37:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:20.772 00:17:20.772 real 5m4.507s 00:17:20.772 user 10m37.902s 00:17:20.772 sys 1m7.827s 00:17:20.772 19:37:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.772 19:37:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:20.772 ************************************ 00:17:20.772 END TEST nvmf_target_extra 00:17:20.772 ************************************ 00:17:20.772 19:37:14 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:20.772 19:37:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:20.772 19:37:14 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:20.772 19:37:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:20.772 ************************************ 00:17:20.772 START TEST nvmf_host 00:17:20.772 ************************************ 00:17:20.772 19:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:20.772 * Looking for test storage... 
00:17:20.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:20.772 19:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:20.772 19:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:17:20.772 19:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:21.050 19:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:21.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.051 --rc genhtml_branch_coverage=1 00:17:21.051 --rc genhtml_function_coverage=1 00:17:21.051 --rc genhtml_legend=1 00:17:21.051 --rc geninfo_all_blocks=1 00:17:21.051 --rc geninfo_unexecuted_blocks=1 00:17:21.051 00:17:21.051 ' 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:21.051 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:21.051 --rc genhtml_branch_coverage=1 00:17:21.051 --rc genhtml_function_coverage=1 00:17:21.051 --rc genhtml_legend=1 00:17:21.051 --rc geninfo_all_blocks=1 00:17:21.051 --rc geninfo_unexecuted_blocks=1 00:17:21.051 00:17:21.051 ' 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:21.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.051 --rc genhtml_branch_coverage=1 00:17:21.051 --rc genhtml_function_coverage=1 00:17:21.051 --rc genhtml_legend=1 00:17:21.051 --rc geninfo_all_blocks=1 00:17:21.051 --rc geninfo_unexecuted_blocks=1 00:17:21.051 00:17:21.051 ' 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:21.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.051 --rc genhtml_branch_coverage=1 00:17:21.051 --rc genhtml_function_coverage=1 00:17:21.051 --rc genhtml_legend=1 00:17:21.051 --rc geninfo_all_blocks=1 00:17:21.051 --rc geninfo_unexecuted_blocks=1 00:17:21.051 00:17:21.051 ' 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:21.051 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:21.051 
19:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.051 ************************************ 00:17:21.051 START TEST nvmf_identify 00:17:21.051 ************************************ 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:21.051 * Looking for test storage... 00:17:21.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.051 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:21.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.052 --rc genhtml_branch_coverage=1 00:17:21.052 --rc genhtml_function_coverage=1 00:17:21.052 --rc genhtml_legend=1 00:17:21.052 --rc geninfo_all_blocks=1 00:17:21.052 --rc geninfo_unexecuted_blocks=1 00:17:21.052 00:17:21.052 ' 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:21.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.052 --rc genhtml_branch_coverage=1 00:17:21.052 --rc genhtml_function_coverage=1 00:17:21.052 --rc genhtml_legend=1 00:17:21.052 --rc geninfo_all_blocks=1 00:17:21.052 --rc geninfo_unexecuted_blocks=1 00:17:21.052 00:17:21.052 ' 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:21.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.052 --rc genhtml_branch_coverage=1 00:17:21.052 --rc genhtml_function_coverage=1 00:17:21.052 --rc genhtml_legend=1 00:17:21.052 --rc geninfo_all_blocks=1 00:17:21.052 --rc geninfo_unexecuted_blocks=1 00:17:21.052 00:17:21.052 ' 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:21.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.052 --rc genhtml_branch_coverage=1 00:17:21.052 --rc genhtml_function_coverage=1 00:17:21.052 --rc genhtml_legend=1 00:17:21.052 --rc geninfo_all_blocks=1 00:17:21.052 --rc geninfo_unexecuted_blocks=1 00:17:21.052 00:17:21.052 ' 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:21.052 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:21.311 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.312 
19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:21.312 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.312 19:37:14 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:21.312 Cannot find device "nvmf_init_br" 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:21.312 Cannot find device "nvmf_init_br2" 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:21.312 Cannot find device "nvmf_tgt_br" 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:21.312 Cannot find device "nvmf_tgt_br2" 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:21.312 Cannot find device "nvmf_init_br" 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:21.312 Cannot find device "nvmf_init_br2" 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:21.312 Cannot find device "nvmf_tgt_br" 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:21.312 Cannot find device "nvmf_tgt_br2" 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:21.312 Cannot find device "nvmf_br" 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:21.312 Cannot find device "nvmf_init_if" 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:21.312 Cannot find device "nvmf_init_if2" 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:21.312 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:21.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.313 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:21.313 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:21.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.313 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:21.313 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:21.313 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:21.313 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:21.313 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:21.313 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:21.313 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:21.313 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:21.572 
19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:21.572 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:21.572 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:17:21.572 00:17:21.572 --- 10.0.0.3 ping statistics --- 00:17:21.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.572 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:21.572 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:21.572 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:17:21.572 00:17:21.572 --- 10.0.0.4 ping statistics --- 00:17:21.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.572 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:21.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:21.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:21.572 00:17:21.572 --- 10.0.0.1 ping statistics --- 00:17:21.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.572 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:21.572 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:21.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:17:21.572 00:17:21.572 --- 10.0.0.2 ping statistics --- 00:17:21.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.572 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74111 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74111 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74111 ']' 00:17:21.573 
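For reference, the veth/namespace/bridge topology that nvmf_veth_init assembled in the trace above reduces to the commands below. This is a hand-condensed sketch drawn from the ip/iptables calls visible in the log, not the helper's literal source; the loop structure and ordering are illustrative.

# Target-side interfaces live in the nvmf_tgt_ns_spdk namespace, initiator-side
# interfaces stay in the default namespace, and everything meets on the nvmf_br bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br   # bridge the host-side veth peers together
done
# The ipts helper seen above is plain iptables plus an SPDK_NVMF comment so the
# rules can be found and removed again on cleanup:
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

The four pings in the trace (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) confirm the bridge forwards traffic in both directions before the target is started.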
19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.573 19:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:21.842 [2024-12-05 19:37:15.011444] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:17:21.842 [2024-12-05 19:37:15.011530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.842 [2024-12-05 19:37:15.161953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:21.842 [2024-12-05 19:37:15.228872] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.842 [2024-12-05 19:37:15.228955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.842 [2024-12-05 19:37:15.228982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.842 [2024-12-05 19:37:15.229003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.842 [2024-12-05 19:37:15.229012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
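The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from waitforlisten. A hypothetical condensation of that wait loop, using only the details visible in the trace (pid 74111, rpc_addr=/var/tmp/spdk.sock, max_retries=100); the real implementation in autotest_common.sh differs in detail:

nvmfpid=74111
rpc_addr=/var/tmp/spdk.sock
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    # Bail out if the freshly launched nvmf_tgt died before it ever listened.
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
    # Any successful RPC means the target is up and serving on the socket;
    # rpc_get_methods is a cheap call to probe with.
    if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5   # probe interval is illustrative
done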
00:17:21.842 [2024-12-05 19:37:15.230390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.842 [2024-12-05 19:37:15.230550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.842 [2024-12-05 19:37:15.230609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:21.842 [2024-12-05 19:37:15.230617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.102 [2024-12-05 19:37:15.294469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:22.102 [2024-12-05 19:37:15.379351] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:22.102 Malloc0 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:22.102 [2024-12-05 19:37:15.498491] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:22.102 [ 00:17:22.102 { 00:17:22.102 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:22.102 "subtype": "Discovery", 00:17:22.102 "listen_addresses": [ 00:17:22.102 { 00:17:22.102 "trtype": "TCP", 00:17:22.102 "adrfam": "IPv4", 00:17:22.102 "traddr": "10.0.0.3", 00:17:22.102 "trsvcid": "4420" 00:17:22.102 } 00:17:22.102 ], 00:17:22.102 "allow_any_host": true, 00:17:22.102 "hosts": [] 00:17:22.102 }, 00:17:22.102 { 00:17:22.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.102 "subtype": "NVMe", 00:17:22.102 "listen_addresses": [ 00:17:22.102 { 00:17:22.102 "trtype": "TCP", 00:17:22.102 "adrfam": "IPv4", 00:17:22.102 "traddr": "10.0.0.3", 00:17:22.102 "trsvcid": "4420" 00:17:22.102 } 00:17:22.102 ], 00:17:22.102 "allow_any_host": true, 00:17:22.102 "hosts": [], 00:17:22.102 "serial_number": "SPDK00000000000001", 00:17:22.102 "model_number": "SPDK bdev Controller", 00:17:22.102 "max_namespaces": 32, 00:17:22.102 "min_cntlid": 1, 00:17:22.102 "max_cntlid": 65519, 00:17:22.102 "namespaces": [ 00:17:22.102 { 00:17:22.102 "nsid": 1, 00:17:22.102 "bdev_name": "Malloc0", 00:17:22.102 "name": "Malloc0", 00:17:22.102 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:22.102 "eui64": "ABCDEF0123456789", 00:17:22.102 "uuid": "5eaaf4d9-8c92-4227-96d3-71b969fc38ad" 00:17:22.102 } 00:17:22.102 ] 00:17:22.102 } 00:17:22.102 ] 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.102 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:22.364 [2024-12-05 19:37:15.556052] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
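With the target listening, identify.sh drives the setup through rpc_cmd, a thin wrapper over scripts/rpc.py. Run by hand against the same socket, the sequence traced above would look roughly like this (arguments copied from the trace; /var/tmp/spdk.sock is the default RPC socket seen earlier):

scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems   # prints the JSON shown above

The nvmf_get_subsystems output above confirms that both the discovery subsystem and nqn.2016-06.io.spdk:cnode1 (with Malloc0 as namespace 1) are exposed on 10.0.0.3:4420 before spdk_nvme_identify is run.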
00:17:22.364 [2024-12-05 19:37:15.556143] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74133 ] 00:17:22.364 [2024-12-05 19:37:15.717456] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:22.364 [2024-12-05 19:37:15.717518] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:22.364 [2024-12-05 19:37:15.717526] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:22.364 [2024-12-05 19:37:15.717541] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:22.364 [2024-12-05 19:37:15.717554] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:22.364 [2024-12-05 19:37:15.717873] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:22.364 [2024-12-05 19:37:15.717944] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a79750 0 00:17:22.364 [2024-12-05 19:37:15.725213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:22.364 [2024-12-05 19:37:15.725237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:22.364 [2024-12-05 19:37:15.725259] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:22.364 [2024-12-05 19:37:15.725263] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:22.364 [2024-12-05 19:37:15.725294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.364 [2024-12-05 19:37:15.725302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.364 [2024-12-05 19:37:15.725307] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a79750) 00:17:22.364 [2024-12-05 19:37:15.725321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:22.364 [2024-12-05 19:37:15.725354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1add740, cid 0, qid 0 00:17:22.364 [2024-12-05 19:37:15.733149] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.364 [2024-12-05 19:37:15.733171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.364 [2024-12-05 19:37:15.733193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.364 [2024-12-05 19:37:15.733198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1add740) on tqpair=0x1a79750 00:17:22.364 [2024-12-05 19:37:15.733208] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:22.364 [2024-12-05 19:37:15.733217] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:17:22.364 [2024-12-05 19:37:15.733223] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:17:22.364 [2024-12-05 19:37:15.733239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.364 [2024-12-05 19:37:15.733244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:22.364 [2024-12-05 19:37:15.733248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a79750) 00:17:22.364 [2024-12-05 19:37:15.733258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.364 [2024-12-05 19:37:15.733285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1add740, cid 0, qid 0 00:17:22.364 [2024-12-05 19:37:15.733345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.364 [2024-12-05 19:37:15.733352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.364 [2024-12-05 19:37:15.733356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.364 [2024-12-05 19:37:15.733360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1add740) on tqpair=0x1a79750 00:17:22.364 [2024-12-05 19:37:15.733375] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:17:22.364 [2024-12-05 19:37:15.733382] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:17:22.364 [2024-12-05 19:37:15.733390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.364 [2024-12-05 19:37:15.733394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.364 [2024-12-05 19:37:15.733398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a79750) 00:17:22.364 [2024-12-05 19:37:15.733405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.364 [2024-12-05 19:37:15.733424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1add740, cid 0, qid 0 00:17:22.364 [2024-12-05 19:37:15.733485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.364 [2024-12-05 19:37:15.733492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.364 [2024-12-05 19:37:15.733496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.364 [2024-12-05 19:37:15.733500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1add740) on tqpair=0x1a79750 00:17:22.364 [2024-12-05 19:37:15.733506] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:17:22.364 [2024-12-05 19:37:15.733514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:22.364 [2024-12-05 19:37:15.733522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.364 [2024-12-05 19:37:15.733526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.364 [2024-12-05 19:37:15.733530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a79750) 00:17:22.364 [2024-12-05 19:37:15.733537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.364 [2024-12-05 19:37:15.733556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1add740, cid 0, qid 0 00:17:22.364 [2024-12-05 19:37:15.733601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.364 [2024-12-05 19:37:15.733608] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.364 [2024-12-05 19:37:15.733612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.364 [2024-12-05 19:37:15.733616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1add740) on tqpair=0x1a79750 00:17:22.364 [2024-12-05 19:37:15.733622] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:22.364 [2024-12-05 19:37:15.733632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.364 [2024-12-05 19:37:15.733637] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.364 [2024-12-05 19:37:15.733641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a79750) 00:17:22.364 [2024-12-05 19:37:15.733648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.364 [2024-12-05 19:37:15.733665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1add740, cid 0, qid 0 00:17:22.364 [2024-12-05 19:37:15.733711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.364 [2024-12-05 19:37:15.733718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.364 [2024-12-05 19:37:15.733721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.364 [2024-12-05 19:37:15.733726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1add740) on tqpair=0x1a79750 00:17:22.364 [2024-12-05 19:37:15.733731] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:22.364 [2024-12-05 19:37:15.733737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:22.364 [2024-12-05 19:37:15.733745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:22.365 [2024-12-05 19:37:15.733856] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:17:22.365 [2024-12-05 19:37:15.733862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:22.365 [2024-12-05 19:37:15.733872] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.733876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.733880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a79750) 00:17:22.365 [2024-12-05 19:37:15.733887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.365 [2024-12-05 19:37:15.733907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1add740, cid 0, qid 0 00:17:22.365 [2024-12-05 19:37:15.733954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.365 [2024-12-05 19:37:15.733966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.365 [2024-12-05 19:37:15.733970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:17:22.365 [2024-12-05 19:37:15.733975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1add740) on tqpair=0x1a79750 00:17:22.365 [2024-12-05 19:37:15.733980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:22.365 [2024-12-05 19:37:15.733991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.733995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.733999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a79750) 00:17:22.365 [2024-12-05 19:37:15.734007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.365 [2024-12-05 19:37:15.734026] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1add740, cid 0, qid 0 00:17:22.365 [2024-12-05 19:37:15.734070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.365 [2024-12-05 19:37:15.734076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.365 [2024-12-05 19:37:15.734080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1add740) on tqpair=0x1a79750 00:17:22.365 [2024-12-05 19:37:15.734089] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:22.365 [2024-12-05 19:37:15.734095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:22.365 [2024-12-05 19:37:15.734103] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:17:22.365 [2024-12-05 19:37:15.734127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:22.365 [2024-12-05 19:37:15.734140] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a79750) 00:17:22.365 [2024-12-05 19:37:15.734153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.365 [2024-12-05 19:37:15.734175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1add740, cid 0, qid 0 00:17:22.365 [2024-12-05 19:37:15.734267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:22.365 [2024-12-05 19:37:15.734275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:22.365 [2024-12-05 19:37:15.734279] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734283] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a79750): datao=0, datal=4096, cccid=0 00:17:22.365 [2024-12-05 19:37:15.734288] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1add740) on tqpair(0x1a79750): expected_datao=0, payload_size=4096 00:17:22.365 [2024-12-05 19:37:15.734294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734302] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734314] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.365 [2024-12-05 19:37:15.734329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.365 [2024-12-05 19:37:15.734332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1add740) on tqpair=0x1a79750 00:17:22.365 [2024-12-05 19:37:15.734346] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:17:22.365 [2024-12-05 19:37:15.734352] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:17:22.365 [2024-12-05 19:37:15.734356] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:17:22.365 [2024-12-05 19:37:15.734362] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:17:22.365 [2024-12-05 19:37:15.734367] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:17:22.365 [2024-12-05 19:37:15.734372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:17:22.365 [2024-12-05 19:37:15.734386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:22.365 [2024-12-05 19:37:15.734395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a79750) 00:17:22.365 [2024-12-05 19:37:15.734412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.365 [2024-12-05 19:37:15.734432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1add740, cid 0, qid 0 00:17:22.365 [2024-12-05 19:37:15.734488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.365 [2024-12-05 19:37:15.734495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.365 [2024-12-05 19:37:15.734499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1add740) on tqpair=0x1a79750 00:17:22.365 [2024-12-05 19:37:15.734511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a79750) 00:17:22.365 [2024-12-05 19:37:15.734526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:22.365 
[2024-12-05 19:37:15.734532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a79750) 00:17:22.365 [2024-12-05 19:37:15.734546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:22.365 [2024-12-05 19:37:15.734552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a79750) 00:17:22.365 [2024-12-05 19:37:15.734566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:22.365 [2024-12-05 19:37:15.734572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734575] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734579] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a79750) 00:17:22.365 [2024-12-05 19:37:15.734585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:22.365 [2024-12-05 19:37:15.734590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:22.365 [2024-12-05 19:37:15.734606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:22.365 [2024-12-05 19:37:15.734614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.365 [2024-12-05 19:37:15.734618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a79750) 00:17:22.365 [2024-12-05 19:37:15.734625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.365 [2024-12-05 19:37:15.734647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1add740, cid 0, qid 0 00:17:22.366 [2024-12-05 19:37:15.734655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1add8c0, cid 1, qid 0 00:17:22.366 [2024-12-05 19:37:15.734660] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1adda40, cid 2, qid 0 00:17:22.366 [2024-12-05 19:37:15.734666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addbc0, cid 3, qid 0 00:17:22.366 [2024-12-05 19:37:15.734671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addd40, cid 4, qid 0 00:17:22.366 [2024-12-05 19:37:15.734751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.366 [2024-12-05 19:37:15.734758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.366 [2024-12-05 19:37:15.734762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.734766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addd40) on tqpair=0x1a79750 00:17:22.366 [2024-12-05 
19:37:15.734772] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:17:22.366 [2024-12-05 19:37:15.734778] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:17:22.366 [2024-12-05 19:37:15.734790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.734795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a79750) 00:17:22.366 [2024-12-05 19:37:15.734803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.366 [2024-12-05 19:37:15.734822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addd40, cid 4, qid 0 00:17:22.366 [2024-12-05 19:37:15.734896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:22.366 [2024-12-05 19:37:15.734904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:22.366 [2024-12-05 19:37:15.734908] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.734912] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a79750): datao=0, datal=4096, cccid=4 00:17:22.366 [2024-12-05 19:37:15.734917] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1addd40) on tqpair(0x1a79750): expected_datao=0, payload_size=4096 00:17:22.366 [2024-12-05 19:37:15.734921] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.734929] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.734933] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.734941] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.366 [2024-12-05 19:37:15.734948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.366 [2024-12-05 19:37:15.734951] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.734955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addd40) on tqpair=0x1a79750 00:17:22.366 [2024-12-05 19:37:15.734969] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:17:22.366 [2024-12-05 19:37:15.734999] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.735006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a79750) 00:17:22.366 [2024-12-05 19:37:15.735014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.366 [2024-12-05 19:37:15.735022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.735026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.735030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a79750) 00:17:22.366 [2024-12-05 19:37:15.735037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:22.366 [2024-12-05 19:37:15.735062] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addd40, cid 4, qid 0 00:17:22.366 [2024-12-05 19:37:15.735070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addec0, cid 5, qid 0 00:17:22.366 [2024-12-05 19:37:15.735221] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:22.366 [2024-12-05 19:37:15.735231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:22.366 [2024-12-05 19:37:15.735235] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.735239] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a79750): datao=0, datal=1024, cccid=4 00:17:22.366 [2024-12-05 19:37:15.735244] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1addd40) on tqpair(0x1a79750): expected_datao=0, payload_size=1024 00:17:22.366 [2024-12-05 19:37:15.735249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.735271] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.735275] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.735282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.366 [2024-12-05 19:37:15.735288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.366 [2024-12-05 19:37:15.735291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.735295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addec0) on tqpair=0x1a79750 00:17:22.366 [2024-12-05 19:37:15.735330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.366 [2024-12-05 19:37:15.735339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.366 [2024-12-05 19:37:15.735343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.735348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addd40) on tqpair=0x1a79750 00:17:22.366 [2024-12-05 19:37:15.735361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.735366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a79750) 00:17:22.366 [2024-12-05 19:37:15.735374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.366 [2024-12-05 19:37:15.735418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addd40, cid 4, qid 0 00:17:22.366 [2024-12-05 19:37:15.735489] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:22.366 [2024-12-05 19:37:15.735496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:22.366 [2024-12-05 19:37:15.735500] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.735504] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a79750): datao=0, datal=3072, cccid=4 00:17:22.366 [2024-12-05 19:37:15.735509] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1addd40) on tqpair(0x1a79750): expected_datao=0, payload_size=3072 00:17:22.366 [2024-12-05 19:37:15.735514] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.735521] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:17:22.366 [2024-12-05 19:37:15.735525] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.735534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.366 [2024-12-05 19:37:15.735540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.366 [2024-12-05 19:37:15.735544] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.735548] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addd40) on tqpair=0x1a79750 00:17:22.366 [2024-12-05 19:37:15.735559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.735564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a79750) 00:17:22.366 [2024-12-05 19:37:15.735571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.366 [2024-12-05 19:37:15.735596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addd40, cid 4, qid 0 00:17:22.366 [2024-12-05 19:37:15.735660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:22.366 [2024-12-05 19:37:15.735667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:22.366 [2024-12-05 19:37:15.735671] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:22.366 [2024-12-05 19:37:15.735674] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a79750): datao=0, datal=8, cccid=4 00:17:22.366 [2024-12-05 19:37:15.735679] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1addd40) on tqpair(0x1a79750): expected_datao=0, payload_size=8 00:17:22.366 [2024-12-05 19:37:15.735684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.366 ===================================================== 00:17:22.366 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:22.366 ===================================================== 00:17:22.366 Controller Capabilities/Features 00:17:22.366 ================================ 00:17:22.366 Vendor ID: 0000 00:17:22.366 Subsystem Vendor ID: 0000 00:17:22.366 Serial Number: .................... 00:17:22.366 Model Number: ........................................ 
00:17:22.366 Firmware Version: 25.01 00:17:22.366 Recommended Arb Burst: 0 00:17:22.366 IEEE OUI Identifier: 00 00 00 00:17:22.366 Multi-path I/O 00:17:22.366 May have multiple subsystem ports: No 00:17:22.366 May have multiple controllers: No 00:17:22.366 Associated with SR-IOV VF: No 00:17:22.366 Max Data Transfer Size: 131072 00:17:22.366 Max Number of Namespaces: 0 00:17:22.366 Max Number of I/O Queues: 1024 00:17:22.366 NVMe Specification Version (VS): 1.3 00:17:22.366 NVMe Specification Version (Identify): 1.3 00:17:22.366 Maximum Queue Entries: 128 00:17:22.366 Contiguous Queues Required: Yes 00:17:22.366 Arbitration Mechanisms Supported 00:17:22.366 Weighted Round Robin: Not Supported 00:17:22.366 Vendor Specific: Not Supported 00:17:22.366 Reset Timeout: 15000 ms 00:17:22.366 Doorbell Stride: 4 bytes 00:17:22.366 NVM Subsystem Reset: Not Supported 00:17:22.366 Command Sets Supported 00:17:22.366 NVM Command Set: Supported 00:17:22.366 Boot Partition: Not Supported 00:17:22.366 Memory Page Size Minimum: 4096 bytes 00:17:22.366 Memory Page Size Maximum: 4096 bytes 00:17:22.366 Persistent Memory Region: Not Supported 00:17:22.366 Optional Asynchronous Events Supported 00:17:22.366 Namespace Attribute Notices: Not Supported 00:17:22.366 Firmware Activation Notices: Not Supported 00:17:22.366 ANA Change Notices: Not Supported 00:17:22.366 PLE Aggregate Log Change Notices: Not Supported 00:17:22.366 LBA Status Info Alert Notices: Not Supported 00:17:22.366 EGE Aggregate Log Change Notices: Not Supported 00:17:22.366 Normal NVM Subsystem Shutdown event: Not Supported 00:17:22.367 Zone Descriptor Change Notices: Not Supported 00:17:22.367 Discovery Log Change Notices: Supported 00:17:22.367 Controller Attributes 00:17:22.367 128-bit Host Identifier: Not Supported 00:17:22.367 Non-Operational Permissive Mode: Not Supported 00:17:22.367 NVM Sets: Not Supported 00:17:22.367 Read Recovery Levels: Not Supported 00:17:22.367 Endurance Groups: Not Supported 00:17:22.367 Predictable Latency Mode: Not Supported 00:17:22.367 Traffic Based Keep ALive: Not Supported 00:17:22.367 Namespace Granularity: Not Supported 00:17:22.367 SQ Associations: Not Supported 00:17:22.367 UUID List: Not Supported 00:17:22.367 Multi-Domain Subsystem: Not Supported 00:17:22.367 Fixed Capacity Management: Not Supported 00:17:22.367 Variable Capacity Management: Not Supported 00:17:22.367 Delete Endurance Group: Not Supported 00:17:22.367 Delete NVM Set: Not Supported 00:17:22.367 Extended LBA Formats Supported: Not Supported 00:17:22.367 Flexible Data Placement Supported: Not Supported 00:17:22.367 00:17:22.367 Controller Memory Buffer Support 00:17:22.367 ================================ 00:17:22.367 Supported: No 00:17:22.367 00:17:22.367 Persistent Memory Region Support 00:17:22.367 ================================ 00:17:22.367 Supported: No 00:17:22.367 00:17:22.367 Admin Command Set Attributes 00:17:22.367 ============================ 00:17:22.367 Security Send/Receive: Not Supported 00:17:22.367 Format NVM: Not Supported 00:17:22.367 Firmware Activate/Download: Not Supported 00:17:22.367 Namespace Management: Not Supported 00:17:22.367 Device Self-Test: Not Supported 00:17:22.367 Directives: Not Supported 00:17:22.367 NVMe-MI: Not Supported 00:17:22.367 Virtualization Management: Not Supported 00:17:22.367 Doorbell Buffer Config: Not Supported 00:17:22.367 Get LBA Status Capability: Not Supported 00:17:22.367 Command & Feature Lockdown Capability: Not Supported 00:17:22.367 Abort Command Limit: 1 00:17:22.367 Async 
Event Request Limit: 4 00:17:22.367 Number of Firmware Slots: N/A 00:17:22.367 Firmware Slot 1 Read-Only: N/A 00:17:22.367 Firmware Activation Without Reset: N/A 00:17:22.367 Multiple Update Detection Support: N/A 00:17:22.367 Firmware Update Granularity: No Information Provided 00:17:22.367 Per-Namespace SMART Log: No 00:17:22.367 Asymmetric Namespace Access Log Page: Not Supported 00:17:22.367 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:22.367 Command Effects Log Page: Not Supported 00:17:22.367 Get Log Page Extended Data: Supported 00:17:22.367 Telemetry Log Pages: Not Supported 00:17:22.367 Persistent Event Log Pages: Not Supported 00:17:22.367 Supported Log Pages Log Page: May Support 00:17:22.367 Commands Supported & Effects Log Page: Not Supported 00:17:22.367 Feature Identifiers & Effects Log Page:May Support 00:17:22.367 NVMe-MI Commands & Effects Log Page: May Support 00:17:22.367 Data Area 4 for Telemetry Log: Not Supported 00:17:22.367 Error Log Page Entries Supported: 128 00:17:22.367 Keep Alive: Not Supported 00:17:22.367 00:17:22.367 NVM Command Set Attributes 00:17:22.367 ========================== 00:17:22.367 Submission Queue Entry Size 00:17:22.367 Max: 1 00:17:22.367 Min: 1 00:17:22.367 Completion Queue Entry Size 00:17:22.367 Max: 1 00:17:22.367 Min: 1 00:17:22.367 Number of Namespaces: 0 00:17:22.367 Compare Command: Not Supported 00:17:22.367 Write Uncorrectable Command: Not Supported 00:17:22.367 Dataset Management Command: Not Supported 00:17:22.367 Write Zeroes Command: Not Supported 00:17:22.367 Set Features Save Field: Not Supported 00:17:22.367 Reservations: Not Supported 00:17:22.367 Timestamp: Not Supported 00:17:22.367 Copy: Not Supported 00:17:22.367 Volatile Write Cache: Not Present 00:17:22.367 Atomic Write Unit (Normal): 1 00:17:22.367 Atomic Write Unit (PFail): 1 00:17:22.367 Atomic Compare & Write Unit: 1 00:17:22.367 Fused Compare & Write: Supported 00:17:22.367 Scatter-Gather List 00:17:22.367 SGL Command Set: Supported 00:17:22.367 SGL Keyed: Supported 00:17:22.367 SGL Bit Bucket Descriptor: Not Supported 00:17:22.367 SGL Metadata Pointer: Not Supported 00:17:22.367 Oversized SGL: Not Supported 00:17:22.367 SGL Metadata Address: Not Supported 00:17:22.367 SGL Offset: Supported 00:17:22.367 Transport SGL Data Block: Not Supported 00:17:22.367 Replay Protected Memory Block: Not Supported 00:17:22.367 00:17:22.367 Firmware Slot Information 00:17:22.367 ========================= 00:17:22.367 Active slot: 0 00:17:22.367 00:17:22.367 00:17:22.367 Error Log 00:17:22.367 ========= 00:17:22.367 00:17:22.367 Active Namespaces 00:17:22.367 ================= 00:17:22.367 Discovery Log Page 00:17:22.367 ================== 00:17:22.367 Generation Counter: 2 00:17:22.367 Number of Records: 2 00:17:22.367 Record Format: 0 00:17:22.367 00:17:22.367 Discovery Log Entry 0 00:17:22.367 ---------------------- 00:17:22.367 Transport Type: 3 (TCP) 00:17:22.367 Address Family: 1 (IPv4) 00:17:22.367 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:22.367 Entry Flags: 00:17:22.367 Duplicate Returned Information: 1 00:17:22.367 Explicit Persistent Connection Support for Discovery: 1 00:17:22.367 Transport Requirements: 00:17:22.367 Secure Channel: Not Required 00:17:22.367 Port ID: 0 (0x0000) 00:17:22.367 Controller ID: 65535 (0xffff) 00:17:22.367 Admin Max SQ Size: 128 00:17:22.367 Transport Service Identifier: 4420 00:17:22.367 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:22.367 Transport Address: 10.0.0.3 00:17:22.367 
Discovery Log Entry 1 00:17:22.367 ---------------------- 00:17:22.367 Transport Type: 3 (TCP) 00:17:22.367 Address Family: 1 (IPv4) 00:17:22.367 Subsystem Type: 2 (NVM Subsystem) 00:17:22.367 Entry Flags: 00:17:22.367 Duplicate Returned Information: 0 00:17:22.367 Explicit Persistent Connection Support for Discovery: 0 00:17:22.367 Transport Requirements: 00:17:22.367 Secure Channel: Not Required 00:17:22.367 Port ID: 0 (0x0000) 00:17:22.367 Controller ID: 65535 (0xffff) 00:17:22.367 Admin Max SQ Size: 128 00:17:22.367 Transport Service Identifier: 4420 00:17:22.367 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:22.367 Transport Address: 10.0.0.3 [2024-12-05 19:37:15.735691] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:22.367 [2024-12-05 19:37:15.735695] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:22.367 [2024-12-05 19:37:15.735735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.367 [2024-12-05 19:37:15.735743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.367 [2024-12-05 19:37:15.735747] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.367 [2024-12-05 19:37:15.735751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addd40) on tqpair=0x1a79750 00:17:22.367 [2024-12-05 19:37:15.735872] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:17:22.367 [2024-12-05 19:37:15.735891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1add740) on tqpair=0x1a79750 00:17:22.367 [2024-12-05 19:37:15.735899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:22.367 [2024-12-05 19:37:15.735905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1add8c0) on tqpair=0x1a79750 00:17:22.367 [2024-12-05 19:37:15.735910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:22.367 [2024-12-05 19:37:15.735915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1adda40) on tqpair=0x1a79750 00:17:22.367 [2024-12-05 19:37:15.735920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:22.367 [2024-12-05 19:37:15.735926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addbc0) on tqpair=0x1a79750 00:17:22.368 [2024-12-05 19:37:15.735930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:22.368 [2024-12-05 19:37:15.735940] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.735945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.735949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a79750) 00:17:22.368 [2024-12-05 19:37:15.735957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.368 [2024-12-05 19:37:15.735985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addbc0, cid 3, qid 0 00:17:22.368 [2024-12-05 19:37:15.736045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.368 [2024-12-05 19:37:15.736053] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.368 [2024-12-05 19:37:15.736057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addbc0) on tqpair=0x1a79750 00:17:22.368 [2024-12-05 19:37:15.736070] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a79750) 00:17:22.368 [2024-12-05 19:37:15.736086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.368 [2024-12-05 19:37:15.736124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addbc0, cid 3, qid 0 00:17:22.368 [2024-12-05 19:37:15.736222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.368 [2024-12-05 19:37:15.736230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.368 [2024-12-05 19:37:15.736233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addbc0) on tqpair=0x1a79750 00:17:22.368 [2024-12-05 19:37:15.736249] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:17:22.368 [2024-12-05 19:37:15.736255] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:17:22.368 [2024-12-05 19:37:15.736266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a79750) 00:17:22.368 [2024-12-05 19:37:15.736283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.368 [2024-12-05 19:37:15.736304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addbc0, cid 3, qid 0 00:17:22.368 [2024-12-05 19:37:15.736354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.368 [2024-12-05 19:37:15.736361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.368 [2024-12-05 19:37:15.736365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addbc0) on tqpair=0x1a79750 00:17:22.368 [2024-12-05 19:37:15.736380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a79750) 00:17:22.368 [2024-12-05 19:37:15.736397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.368 [2024-12-05 19:37:15.736415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addbc0, cid 3, qid 0 00:17:22.368 [2024-12-05 
19:37:15.736457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.368 [2024-12-05 19:37:15.736464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.368 [2024-12-05 19:37:15.736468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addbc0) on tqpair=0x1a79750 00:17:22.368 [2024-12-05 19:37:15.736482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a79750) 00:17:22.368 [2024-12-05 19:37:15.736513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.368 [2024-12-05 19:37:15.736530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addbc0, cid 3, qid 0 00:17:22.368 [2024-12-05 19:37:15.736577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.368 [2024-12-05 19:37:15.736584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.368 [2024-12-05 19:37:15.736588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addbc0) on tqpair=0x1a79750 00:17:22.368 [2024-12-05 19:37:15.736602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736610] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a79750) 00:17:22.368 [2024-12-05 19:37:15.736617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.368 [2024-12-05 19:37:15.736634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addbc0, cid 3, qid 0 00:17:22.368 [2024-12-05 19:37:15.736679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.368 [2024-12-05 19:37:15.736685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.368 [2024-12-05 19:37:15.736689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addbc0) on tqpair=0x1a79750 00:17:22.368 [2024-12-05 19:37:15.736703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a79750) 00:17:22.368 [2024-12-05 19:37:15.736719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.368 [2024-12-05 19:37:15.736736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addbc0, cid 3, qid 0 00:17:22.368 [2024-12-05 19:37:15.736783] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.368 [2024-12-05 19:37:15.736790] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.368 
[2024-12-05 19:37:15.736794] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736798] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addbc0) on tqpair=0x1a79750 00:17:22.368 [2024-12-05 19:37:15.736808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a79750) 00:17:22.368 [2024-12-05 19:37:15.736824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.368 [2024-12-05 19:37:15.736841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addbc0, cid 3, qid 0 00:17:22.368 [2024-12-05 19:37:15.736888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.368 [2024-12-05 19:37:15.736895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.368 [2024-12-05 19:37:15.736899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736903] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addbc0) on tqpair=0x1a79750 00:17:22.368 [2024-12-05 19:37:15.736913] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.736922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a79750) 00:17:22.368 [2024-12-05 19:37:15.736929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.368 [2024-12-05 19:37:15.736946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addbc0, cid 3, qid 0 00:17:22.368 [2024-12-05 19:37:15.736988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.368 [2024-12-05 19:37:15.736994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.368 [2024-12-05 19:37:15.736998] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.737002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addbc0) on tqpair=0x1a79750 00:17:22.368 [2024-12-05 19:37:15.737012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.737017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.737021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a79750) 00:17:22.368 [2024-12-05 19:37:15.737028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.368 [2024-12-05 19:37:15.737045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addbc0, cid 3, qid 0 00:17:22.368 [2024-12-05 19:37:15.737087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.368 [2024-12-05 19:37:15.737094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.368 [2024-12-05 19:37:15.737097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.737101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1addbc0) on tqpair=0x1a79750 00:17:22.368 [2024-12-05 19:37:15.737112] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.737116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.737120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a79750) 00:17:22.368 [2024-12-05 19:37:15.737127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.368 [2024-12-05 19:37:15.741138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addbc0, cid 3, qid 0 00:17:22.368 [2024-12-05 19:37:15.741183] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.368 [2024-12-05 19:37:15.741192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.368 [2024-12-05 19:37:15.741196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.741200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addbc0) on tqpair=0x1a79750 00:17:22.368 [2024-12-05 19:37:15.741214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.368 [2024-12-05 19:37:15.741220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.369 [2024-12-05 19:37:15.741224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a79750) 00:17:22.369 [2024-12-05 19:37:15.741233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.369 [2024-12-05 19:37:15.741258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1addbc0, cid 3, qid 0 00:17:22.369 [2024-12-05 19:37:15.741313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.369 [2024-12-05 19:37:15.741320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.369 [2024-12-05 19:37:15.741324] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.369 [2024-12-05 19:37:15.741328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1addbc0) on tqpair=0x1a79750 00:17:22.369 [2024-12-05 19:37:15.741336] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:17:22.369 00:17:22.369 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:22.369 [2024-12-05 19:37:15.786732] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
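The identify pass above is driven by the prebuilt spdk_nvme_identify binary, but the admin-queue sequence traced in the nvme_ctrlr/nvme_tcp DEBUG lines (FABRIC CONNECT, CC/CSTS property gets and sets, then Identify Controller) can also be reproduced with a few calls from SPDK's public host API. The following is only a minimal sketch, assuming the spdk_nvme_transport_id_parse()/spdk_nvme_connect()/spdk_nvme_ctrlr_get_data() entry points of recent SPDK releases; it is not the identify tool's actual source.

/*
 * Minimal sketch (hypothetical, not the spdk_nvme_identify source):
 * connect to the same NVMe-oF/TCP subsystem probed above and print a
 * few Identify Controller fields via SPDK's public host API.
 */
#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Same transport ID string that identify.sh passes via -r above. */
	const char *trid_str = "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 "
			       "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1";

	env_opts.opts_size = sizeof(env_opts);
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}

	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid, trid_str) != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/*
	 * Connecting performs the admin-queue bring-up seen in the trace:
	 * FABRIC CONNECT, property gets/sets, then Identify Controller.
	 */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect() failed\n");
		return 1;
	}

	/* Cached Identify Controller data; fields are space-padded, not NUL-terminated. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number:    %.*s\n", (int)sizeof(cdata->sn), (const char *)cdata->sn);
	printf("Model Number:     %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);
	printf("Firmware Version: %.*s\n", (int)sizeof(cdata->fr), (const char *)cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}

Built against the SPDK tree checked out earlier and run with the same privileges autotest uses for the prebuilt identify binary, a sketch like this would report the Serial Number, Model Number and Firmware Version fields that also appear in the controller capability dump printed further below.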
00:17:22.369 [2024-12-05 19:37:15.786807] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74139 ] 00:17:22.635 [2024-12-05 19:37:15.944391] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:17:22.635 [2024-12-05 19:37:15.944464] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:22.635 [2024-12-05 19:37:15.944475] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:22.635 [2024-12-05 19:37:15.944489] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:22.635 [2024-12-05 19:37:15.944500] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:22.635 [2024-12-05 19:37:15.944863] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:17:22.635 [2024-12-05 19:37:15.944927] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x127e750 0 00:17:22.635 [2024-12-05 19:37:15.952177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:22.635 [2024-12-05 19:37:15.952203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:22.635 [2024-12-05 19:37:15.952225] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:22.635 [2024-12-05 19:37:15.952237] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:22.635 [2024-12-05 19:37:15.952268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.952275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.952280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e750) 00:17:22.635 [2024-12-05 19:37:15.952317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:22.635 [2024-12-05 19:37:15.952363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2740, cid 0, qid 0 00:17:22.635 [2024-12-05 19:37:15.959191] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.635 [2024-12-05 19:37:15.959209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.635 [2024-12-05 19:37:15.959215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.959220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2740) on tqpair=0x127e750 00:17:22.635 [2024-12-05 19:37:15.959234] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:22.635 [2024-12-05 19:37:15.959243] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:17:22.635 [2024-12-05 19:37:15.959250] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:17:22.635 [2024-12-05 19:37:15.959269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.959275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.959279] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e750) 00:17:22.635 [2024-12-05 19:37:15.959289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.635 [2024-12-05 19:37:15.959317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2740, cid 0, qid 0 00:17:22.635 [2024-12-05 19:37:15.959402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.635 [2024-12-05 19:37:15.959409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.635 [2024-12-05 19:37:15.959413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.959417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2740) on tqpair=0x127e750 00:17:22.635 [2024-12-05 19:37:15.959423] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:17:22.635 [2024-12-05 19:37:15.959447] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:22.635 [2024-12-05 19:37:15.959454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.959459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.959463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e750) 00:17:22.635 [2024-12-05 19:37:15.959470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.635 [2024-12-05 19:37:15.959489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2740, cid 0, qid 0 00:17:22.635 [2024-12-05 19:37:15.959539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.635 [2024-12-05 19:37:15.959546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.635 [2024-12-05 19:37:15.959549] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.959553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2740) on tqpair=0x127e750 00:17:22.635 [2024-12-05 19:37:15.959559] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:22.635 [2024-12-05 19:37:15.959568] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:22.635 [2024-12-05 19:37:15.959575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.959580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.959583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e750) 00:17:22.635 [2024-12-05 19:37:15.959591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.635 [2024-12-05 19:37:15.959608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2740, cid 0, qid 0 00:17:22.635 [2024-12-05 19:37:15.959652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.635 [2024-12-05 19:37:15.959659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.635 
[2024-12-05 19:37:15.959663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.959667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2740) on tqpair=0x127e750 00:17:22.635 [2024-12-05 19:37:15.959673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:22.635 [2024-12-05 19:37:15.959683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.959688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.959691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e750) 00:17:22.635 [2024-12-05 19:37:15.959699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.635 [2024-12-05 19:37:15.959725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2740, cid 0, qid 0 00:17:22.635 [2024-12-05 19:37:15.959791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.635 [2024-12-05 19:37:15.959798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.635 [2024-12-05 19:37:15.959802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.959806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2740) on tqpair=0x127e750 00:17:22.635 [2024-12-05 19:37:15.959811] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:22.635 [2024-12-05 19:37:15.959817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:22.635 [2024-12-05 19:37:15.959825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:22.635 [2024-12-05 19:37:15.959937] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:17:22.635 [2024-12-05 19:37:15.959944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:22.635 [2024-12-05 19:37:15.959953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.959958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.959962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e750) 00:17:22.635 [2024-12-05 19:37:15.959970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.635 [2024-12-05 19:37:15.959998] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2740, cid 0, qid 0 00:17:22.635 [2024-12-05 19:37:15.960048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.635 [2024-12-05 19:37:15.960055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.635 [2024-12-05 19:37:15.960059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.960063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2740) on tqpair=0x127e750 
00:17:22.635 [2024-12-05 19:37:15.960069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:22.635 [2024-12-05 19:37:15.960079] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.960084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.635 [2024-12-05 19:37:15.960088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e750) 00:17:22.635 [2024-12-05 19:37:15.960096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.635 [2024-12-05 19:37:15.960113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2740, cid 0, qid 0 00:17:22.635 [2024-12-05 19:37:15.960173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.636 [2024-12-05 19:37:15.960183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.636 [2024-12-05 19:37:15.960187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2740) on tqpair=0x127e750 00:17:22.636 [2024-12-05 19:37:15.960196] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:22.636 [2024-12-05 19:37:15.960202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:22.636 [2024-12-05 19:37:15.960210] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:22.636 [2024-12-05 19:37:15.960221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:22.636 [2024-12-05 19:37:15.960233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e750) 00:17:22.636 [2024-12-05 19:37:15.960245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.636 [2024-12-05 19:37:15.960267] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2740, cid 0, qid 0 00:17:22.636 [2024-12-05 19:37:15.960360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:22.636 [2024-12-05 19:37:15.960373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:22.636 [2024-12-05 19:37:15.960377] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960382] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127e750): datao=0, datal=4096, cccid=0 00:17:22.636 [2024-12-05 19:37:15.960387] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e2740) on tqpair(0x127e750): expected_datao=0, payload_size=4096 00:17:22.636 [2024-12-05 19:37:15.960393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960401] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960406] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960415] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.636 [2024-12-05 19:37:15.960422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.636 [2024-12-05 19:37:15.960425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2740) on tqpair=0x127e750 00:17:22.636 [2024-12-05 19:37:15.960438] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:22.636 [2024-12-05 19:37:15.960444] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:22.636 [2024-12-05 19:37:15.960449] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:22.636 [2024-12-05 19:37:15.960454] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:17:22.636 [2024-12-05 19:37:15.960458] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:22.636 [2024-12-05 19:37:15.960464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:22.636 [2024-12-05 19:37:15.960479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:22.636 [2024-12-05 19:37:15.960488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960493] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e750) 00:17:22.636 [2024-12-05 19:37:15.960505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.636 [2024-12-05 19:37:15.960526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2740, cid 0, qid 0 00:17:22.636 [2024-12-05 19:37:15.960593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.636 [2024-12-05 19:37:15.960601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.636 [2024-12-05 19:37:15.960604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2740) on tqpair=0x127e750 00:17:22.636 [2024-12-05 19:37:15.960616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960621] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960624] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x127e750) 00:17:22.636 [2024-12-05 19:37:15.960631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:22.636 [2024-12-05 19:37:15.960638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.636 [2024-12-05 
19:37:15.960645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x127e750) 00:17:22.636 [2024-12-05 19:37:15.960651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:22.636 [2024-12-05 19:37:15.960657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x127e750) 00:17:22.636 [2024-12-05 19:37:15.960671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:22.636 [2024-12-05 19:37:15.960677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.636 [2024-12-05 19:37:15.960690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:22.636 [2024-12-05 19:37:15.960696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:22.636 [2024-12-05 19:37:15.960709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:22.636 [2024-12-05 19:37:15.960717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127e750) 00:17:22.636 [2024-12-05 19:37:15.960728] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.636 [2024-12-05 19:37:15.960749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2740, cid 0, qid 0 00:17:22.636 [2024-12-05 19:37:15.960756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e28c0, cid 1, qid 0 00:17:22.636 [2024-12-05 19:37:15.960761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2a40, cid 2, qid 0 00:17:22.636 [2024-12-05 19:37:15.960766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.636 [2024-12-05 19:37:15.960771] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2d40, cid 4, qid 0 00:17:22.636 [2024-12-05 19:37:15.960857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.636 [2024-12-05 19:37:15.960864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.636 [2024-12-05 19:37:15.960868] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960872] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2d40) on tqpair=0x127e750 00:17:22.636 [2024-12-05 19:37:15.960877] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:17:22.636 [2024-12-05 19:37:15.960883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:22.636 [2024-12-05 19:37:15.960891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:22.636 [2024-12-05 19:37:15.960903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:22.636 [2024-12-05 19:37:15.960911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.960919] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127e750) 00:17:22.636 [2024-12-05 19:37:15.960926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.636 [2024-12-05 19:37:15.960945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2d40, cid 4, qid 0 00:17:22.636 [2024-12-05 19:37:15.961018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.636 [2024-12-05 19:37:15.961025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.636 [2024-12-05 19:37:15.961029] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.961033] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2d40) on tqpair=0x127e750 00:17:22.636 [2024-12-05 19:37:15.961100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:22.636 [2024-12-05 19:37:15.961114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:22.636 [2024-12-05 19:37:15.961123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.961127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127e750) 00:17:22.636 [2024-12-05 19:37:15.961135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.636 [2024-12-05 19:37:15.961170] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2d40, cid 4, qid 0 00:17:22.636 [2024-12-05 19:37:15.961235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:22.636 [2024-12-05 19:37:15.961243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:22.636 [2024-12-05 19:37:15.961247] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.961251] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127e750): datao=0, datal=4096, cccid=4 00:17:22.636 [2024-12-05 19:37:15.961255] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e2d40) on tqpair(0x127e750): expected_datao=0, payload_size=4096 00:17:22.636 [2024-12-05 19:37:15.961260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.961268] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:22.636 [2024-12-05 19:37:15.961272] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:22.636 [2024-12-05 
19:37:15.961281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.636 [2024-12-05 19:37:15.961287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.637 [2024-12-05 19:37:15.961291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2d40) on tqpair=0x127e750 00:17:22.637 [2024-12-05 19:37:15.961319] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:22.637 [2024-12-05 19:37:15.961332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:22.637 [2024-12-05 19:37:15.961344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:22.637 [2024-12-05 19:37:15.961367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127e750) 00:17:22.637 [2024-12-05 19:37:15.961379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.637 [2024-12-05 19:37:15.961400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2d40, cid 4, qid 0 00:17:22.637 [2024-12-05 19:37:15.961482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:22.637 [2024-12-05 19:37:15.961489] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:22.637 [2024-12-05 19:37:15.961493] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961497] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127e750): datao=0, datal=4096, cccid=4 00:17:22.637 [2024-12-05 19:37:15.961502] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e2d40) on tqpair(0x127e750): expected_datao=0, payload_size=4096 00:17:22.637 [2024-12-05 19:37:15.961506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961513] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961518] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.637 [2024-12-05 19:37:15.961532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.637 [2024-12-05 19:37:15.961536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2d40) on tqpair=0x127e750 00:17:22.637 [2024-12-05 19:37:15.961558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:22.637 [2024-12-05 19:37:15.961570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:22.637 [2024-12-05 19:37:15.961579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x127e750) 00:17:22.637 [2024-12-05 19:37:15.961590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.637 [2024-12-05 19:37:15.961610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2d40, cid 4, qid 0 00:17:22.637 [2024-12-05 19:37:15.961668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:22.637 [2024-12-05 19:37:15.961675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:22.637 [2024-12-05 19:37:15.961679] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961682] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127e750): datao=0, datal=4096, cccid=4 00:17:22.637 [2024-12-05 19:37:15.961687] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e2d40) on tqpair(0x127e750): expected_datao=0, payload_size=4096 00:17:22.637 [2024-12-05 19:37:15.961692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961699] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961703] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.637 [2024-12-05 19:37:15.961717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.637 [2024-12-05 19:37:15.961721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2d40) on tqpair=0x127e750 00:17:22.637 [2024-12-05 19:37:15.961734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:22.637 [2024-12-05 19:37:15.961743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:22.637 [2024-12-05 19:37:15.961754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:22.637 [2024-12-05 19:37:15.961761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:22.637 [2024-12-05 19:37:15.961766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:22.637 [2024-12-05 19:37:15.961772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:17:22.637 [2024-12-05 19:37:15.961777] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:17:22.637 [2024-12-05 19:37:15.961782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:22.637 [2024-12-05 19:37:15.961788] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:22.637 [2024-12-05 19:37:15.961803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.637 
[2024-12-05 19:37:15.961808] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127e750) 00:17:22.637 [2024-12-05 19:37:15.961815] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.637 [2024-12-05 19:37:15.961823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x127e750) 00:17:22.637 [2024-12-05 19:37:15.961837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:22.637 [2024-12-05 19:37:15.961861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2d40, cid 4, qid 0 00:17:22.637 [2024-12-05 19:37:15.961869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2ec0, cid 5, qid 0 00:17:22.637 [2024-12-05 19:37:15.961931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.637 [2024-12-05 19:37:15.961938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.637 [2024-12-05 19:37:15.961942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2d40) on tqpair=0x127e750 00:17:22.637 [2024-12-05 19:37:15.961953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.637 [2024-12-05 19:37:15.961959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.637 [2024-12-05 19:37:15.961962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2ec0) on tqpair=0x127e750 00:17:22.637 [2024-12-05 19:37:15.961977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.961997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x127e750) 00:17:22.637 [2024-12-05 19:37:15.962004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.637 [2024-12-05 19:37:15.962022] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2ec0, cid 5, qid 0 00:17:22.637 [2024-12-05 19:37:15.962073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.637 [2024-12-05 19:37:15.962080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.637 [2024-12-05 19:37:15.962083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.962088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2ec0) on tqpair=0x127e750 00:17:22.637 [2024-12-05 19:37:15.962098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.962103] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x127e750) 00:17:22.637 [2024-12-05 19:37:15.962110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.637 [2024-12-05 19:37:15.962127] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2ec0, cid 5, qid 0 00:17:22.637 [2024-12-05 19:37:15.962223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.637 [2024-12-05 19:37:15.962234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.637 [2024-12-05 19:37:15.962239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.962249] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2ec0) on tqpair=0x127e750 00:17:22.637 [2024-12-05 19:37:15.962261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.962266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x127e750) 00:17:22.637 [2024-12-05 19:37:15.962273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.637 [2024-12-05 19:37:15.962296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2ec0, cid 5, qid 0 00:17:22.637 [2024-12-05 19:37:15.962362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.637 [2024-12-05 19:37:15.962369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.637 [2024-12-05 19:37:15.962373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.962377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2ec0) on tqpair=0x127e750 00:17:22.637 [2024-12-05 19:37:15.962398] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.962404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x127e750) 00:17:22.637 [2024-12-05 19:37:15.962412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.637 [2024-12-05 19:37:15.962420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.962424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x127e750) 00:17:22.637 [2024-12-05 19:37:15.962430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.637 [2024-12-05 19:37:15.962437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.637 [2024-12-05 19:37:15.962441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x127e750) 00:17:22.637 [2024-12-05 19:37:15.962448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.637 [2024-12-05 19:37:15.962455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x127e750) 00:17:22.638 [2024-12-05 19:37:15.962466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.638 [2024-12-05 19:37:15.962486] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2ec0, cid 5, qid 0 00:17:22.638 
[2024-12-05 19:37:15.962494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2d40, cid 4, qid 0 00:17:22.638 [2024-12-05 19:37:15.962498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e3040, cid 6, qid 0 00:17:22.638 [2024-12-05 19:37:15.962503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e31c0, cid 7, qid 0 00:17:22.638 [2024-12-05 19:37:15.962638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:22.638 [2024-12-05 19:37:15.962656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:22.638 [2024-12-05 19:37:15.962661] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962665] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127e750): datao=0, datal=8192, cccid=5 00:17:22.638 [2024-12-05 19:37:15.962670] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e2ec0) on tqpair(0x127e750): expected_datao=0, payload_size=8192 00:17:22.638 [2024-12-05 19:37:15.962675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962692] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962697] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:22.638 [2024-12-05 19:37:15.962709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:22.638 [2024-12-05 19:37:15.962713] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962717] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127e750): datao=0, datal=512, cccid=4 00:17:22.638 [2024-12-05 19:37:15.962721] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e2d40) on tqpair(0x127e750): expected_datao=0, payload_size=512 00:17:22.638 [2024-12-05 19:37:15.962726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962732] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962735] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:22.638 [2024-12-05 19:37:15.962747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:22.638 [2024-12-05 19:37:15.962750] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962754] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127e750): datao=0, datal=512, cccid=6 00:17:22.638 [2024-12-05 19:37:15.962758] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e3040) on tqpair(0x127e750): expected_datao=0, payload_size=512 00:17:22.638 [2024-12-05 19:37:15.962763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962769] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962773] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:22.638 [2024-12-05 19:37:15.962784] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:22.638 [2024-12-05 19:37:15.962788] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962791] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x127e750): datao=0, datal=4096, cccid=7 00:17:22.638 [2024-12-05 19:37:15.962796] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e31c0) on tqpair(0x127e750): expected_datao=0, payload_size=4096 00:17:22.638 [2024-12-05 19:37:15.962800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962807] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962811] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.638 [2024-12-05 19:37:15.962822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.638 [2024-12-05 19:37:15.962826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.638 ===================================================== 00:17:22.638 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:22.638 ===================================================== 00:17:22.638 Controller Capabilities/Features 00:17:22.638 ================================ 00:17:22.638 Vendor ID: 8086 00:17:22.638 Subsystem Vendor ID: 8086 00:17:22.638 Serial Number: SPDK00000000000001 00:17:22.638 Model Number: SPDK bdev Controller 00:17:22.638 Firmware Version: 25.01 00:17:22.638 Recommended Arb Burst: 6 00:17:22.638 IEEE OUI Identifier: e4 d2 5c 00:17:22.638 Multi-path I/O 00:17:22.638 May have multiple subsystem ports: Yes 00:17:22.638 May have multiple controllers: Yes 00:17:22.638 Associated with SR-IOV VF: No 00:17:22.638 Max Data Transfer Size: 131072 00:17:22.638 Max Number of Namespaces: 32 00:17:22.638 Max Number of I/O Queues: 127 00:17:22.638 NVMe Specification Version (VS): 1.3 00:17:22.638 NVMe Specification Version (Identify): 1.3 00:17:22.638 Maximum Queue Entries: 128 00:17:22.638 Contiguous Queues Required: Yes 00:17:22.638 Arbitration Mechanisms Supported 00:17:22.638 Weighted Round Robin: Not Supported 00:17:22.638 Vendor Specific: Not Supported 00:17:22.638 Reset Timeout: 15000 ms 00:17:22.638 Doorbell Stride: 4 bytes 00:17:22.638 NVM Subsystem Reset: Not Supported 00:17:22.638 Command Sets Supported 00:17:22.638 NVM Command Set: Supported 00:17:22.638 Boot Partition: Not Supported 00:17:22.638 Memory Page Size Minimum: 4096 bytes 00:17:22.638 Memory Page Size Maximum: 4096 bytes 00:17:22.638 Persistent Memory Region: Not Supported 00:17:22.638 Optional Asynchronous Events Supported 00:17:22.638 Namespace Attribute Notices: Supported 00:17:22.638 Firmware Activation Notices: Not Supported 00:17:22.638 ANA Change Notices: Not Supported 00:17:22.638 PLE Aggregate Log Change Notices: Not Supported 00:17:22.638 LBA Status Info Alert Notices: Not Supported 00:17:22.638 EGE Aggregate Log Change Notices: Not Supported 00:17:22.638 Normal NVM Subsystem Shutdown event: Not Supported 00:17:22.638 Zone Descriptor Change Notices: Not Supported 00:17:22.638 Discovery Log Change Notices: Not Supported 00:17:22.638 Controller Attributes 00:17:22.638 128-bit Host Identifier: Supported 00:17:22.638 Non-Operational Permissive Mode: Not Supported 00:17:22.638 NVM Sets: Not Supported 00:17:22.638 Read Recovery Levels: Not Supported 00:17:22.638 Endurance Groups: Not Supported 00:17:22.638 Predictable Latency Mode: Not Supported 
00:17:22.638 Traffic Based Keep ALive: Not Supported 00:17:22.638 Namespace Granularity: Not Supported 00:17:22.638 SQ Associations: Not Supported 00:17:22.638 UUID List: Not Supported 00:17:22.638 Multi-Domain Subsystem: Not Supported 00:17:22.638 Fixed Capacity Management: Not Supported 00:17:22.638 Variable Capacity Management: Not Supported 00:17:22.638 Delete Endurance Group: Not Supported 00:17:22.638 Delete NVM Set: Not Supported 00:17:22.638 Extended LBA Formats Supported: Not Supported 00:17:22.638 Flexible Data Placement Supported: Not Supported 00:17:22.638 00:17:22.638 Controller Memory Buffer Support 00:17:22.638 ================================ 00:17:22.638 Supported: No 00:17:22.638 00:17:22.638 Persistent Memory Region Support 00:17:22.638 ================================ 00:17:22.638 Supported: No 00:17:22.638 00:17:22.638 Admin Command Set Attributes 00:17:22.638 ============================ 00:17:22.638 Security Send/Receive: Not Supported 00:17:22.638 Format NVM: Not Supported 00:17:22.638 Firmware Activate/Download: Not Supported 00:17:22.638 Namespace Management: Not Supported 00:17:22.638 Device Self-Test: Not Supported 00:17:22.638 Directives: Not Supported 00:17:22.638 NVMe-MI: Not Supported 00:17:22.638 Virtualization Management: Not Supported 00:17:22.638 Doorbell Buffer Config: Not Supported 00:17:22.638 Get LBA Status Capability: Not Supported 00:17:22.638 Command & Feature Lockdown Capability: Not Supported 00:17:22.638 Abort Command Limit: 4 00:17:22.638 Async Event Request Limit: 4 00:17:22.638 Number of Firmware Slots: N/A 00:17:22.638 Firmware Slot 1 Read-Only: N/A 00:17:22.638 Firmware Activation Without Reset: [2024-12-05 19:37:15.962830] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2ec0) on tqpair=0x127e750 00:17:22.638 [2024-12-05 19:37:15.962847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.638 [2024-12-05 19:37:15.962854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.638 [2024-12-05 19:37:15.962858] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2d40) on tqpair=0x127e750 00:17:22.638 [2024-12-05 19:37:15.962875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.638 [2024-12-05 19:37:15.962881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.638 [2024-12-05 19:37:15.962885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e3040) on tqpair=0x127e750 00:17:22.638 [2024-12-05 19:37:15.962896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.638 [2024-12-05 19:37:15.962902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.638 [2024-12-05 19:37:15.962906] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.638 [2024-12-05 19:37:15.962910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e31c0) on tqpair=0x127e750 00:17:22.638 N/A 00:17:22.638 Multiple Update Detection Support: N/A 00:17:22.638 Firmware Update Granularity: No Information Provided 00:17:22.638 Per-Namespace SMART Log: No 00:17:22.638 Asymmetric Namespace Access Log Page: Not Supported 00:17:22.638 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:22.639 Command Effects Log Page: Supported 00:17:22.639 Get Log Page 
Extended Data: Supported 00:17:22.639 Telemetry Log Pages: Not Supported 00:17:22.639 Persistent Event Log Pages: Not Supported 00:17:22.639 Supported Log Pages Log Page: May Support 00:17:22.639 Commands Supported & Effects Log Page: Not Supported 00:17:22.639 Feature Identifiers & Effects Log Page:May Support 00:17:22.639 NVMe-MI Commands & Effects Log Page: May Support 00:17:22.639 Data Area 4 for Telemetry Log: Not Supported 00:17:22.639 Error Log Page Entries Supported: 128 00:17:22.639 Keep Alive: Supported 00:17:22.639 Keep Alive Granularity: 10000 ms 00:17:22.639 00:17:22.639 NVM Command Set Attributes 00:17:22.639 ========================== 00:17:22.639 Submission Queue Entry Size 00:17:22.639 Max: 64 00:17:22.639 Min: 64 00:17:22.639 Completion Queue Entry Size 00:17:22.639 Max: 16 00:17:22.639 Min: 16 00:17:22.639 Number of Namespaces: 32 00:17:22.639 Compare Command: Supported 00:17:22.639 Write Uncorrectable Command: Not Supported 00:17:22.639 Dataset Management Command: Supported 00:17:22.639 Write Zeroes Command: Supported 00:17:22.639 Set Features Save Field: Not Supported 00:17:22.639 Reservations: Supported 00:17:22.639 Timestamp: Not Supported 00:17:22.639 Copy: Supported 00:17:22.639 Volatile Write Cache: Present 00:17:22.639 Atomic Write Unit (Normal): 1 00:17:22.639 Atomic Write Unit (PFail): 1 00:17:22.639 Atomic Compare & Write Unit: 1 00:17:22.639 Fused Compare & Write: Supported 00:17:22.639 Scatter-Gather List 00:17:22.639 SGL Command Set: Supported 00:17:22.639 SGL Keyed: Supported 00:17:22.639 SGL Bit Bucket Descriptor: Not Supported 00:17:22.639 SGL Metadata Pointer: Not Supported 00:17:22.639 Oversized SGL: Not Supported 00:17:22.639 SGL Metadata Address: Not Supported 00:17:22.639 SGL Offset: Supported 00:17:22.639 Transport SGL Data Block: Not Supported 00:17:22.639 Replay Protected Memory Block: Not Supported 00:17:22.639 00:17:22.639 Firmware Slot Information 00:17:22.639 ========================= 00:17:22.639 Active slot: 1 00:17:22.639 Slot 1 Firmware Revision: 25.01 00:17:22.639 00:17:22.639 00:17:22.639 Commands Supported and Effects 00:17:22.639 ============================== 00:17:22.639 Admin Commands 00:17:22.639 -------------- 00:17:22.639 Get Log Page (02h): Supported 00:17:22.639 Identify (06h): Supported 00:17:22.639 Abort (08h): Supported 00:17:22.639 Set Features (09h): Supported 00:17:22.639 Get Features (0Ah): Supported 00:17:22.639 Asynchronous Event Request (0Ch): Supported 00:17:22.639 Keep Alive (18h): Supported 00:17:22.639 I/O Commands 00:17:22.639 ------------ 00:17:22.639 Flush (00h): Supported LBA-Change 00:17:22.639 Write (01h): Supported LBA-Change 00:17:22.639 Read (02h): Supported 00:17:22.639 Compare (05h): Supported 00:17:22.639 Write Zeroes (08h): Supported LBA-Change 00:17:22.639 Dataset Management (09h): Supported LBA-Change 00:17:22.639 Copy (19h): Supported LBA-Change 00:17:22.639 00:17:22.639 Error Log 00:17:22.639 ========= 00:17:22.639 00:17:22.639 Arbitration 00:17:22.639 =========== 00:17:22.639 Arbitration Burst: 1 00:17:22.639 00:17:22.639 Power Management 00:17:22.639 ================ 00:17:22.639 Number of Power States: 1 00:17:22.639 Current Power State: Power State #0 00:17:22.639 Power State #0: 00:17:22.639 Max Power: 0.00 W 00:17:22.639 Non-Operational State: Operational 00:17:22.639 Entry Latency: Not Reported 00:17:22.639 Exit Latency: Not Reported 00:17:22.639 Relative Read Throughput: 0 00:17:22.639 Relative Read Latency: 0 00:17:22.639 Relative Write Throughput: 0 00:17:22.639 Relative Write Latency: 
0 00:17:22.639 Idle Power: Not Reported 00:17:22.639 Active Power: Not Reported 00:17:22.639 Non-Operational Permissive Mode: Not Supported 00:17:22.639 00:17:22.639 Health Information 00:17:22.639 ================== 00:17:22.639 Critical Warnings: 00:17:22.639 Available Spare Space: OK 00:17:22.639 Temperature: OK 00:17:22.639 Device Reliability: OK 00:17:22.639 Read Only: No 00:17:22.639 Volatile Memory Backup: OK 00:17:22.639 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:22.639 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:22.639 Available Spare: 0% 00:17:22.639 Available Spare Threshold: 0% 00:17:22.639 Life Percentage Used:[2024-12-05 19:37:15.963033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.639 [2024-12-05 19:37:15.963040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x127e750) 00:17:22.639 [2024-12-05 19:37:15.963048] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.639 [2024-12-05 19:37:15.963073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e31c0, cid 7, qid 0 00:17:22.639 [2024-12-05 19:37:15.963120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.639 [2024-12-05 19:37:15.970185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.639 [2024-12-05 19:37:15.970202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.639 [2024-12-05 19:37:15.970207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e31c0) on tqpair=0x127e750 00:17:22.639 [2024-12-05 19:37:15.970280] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:17:22.639 [2024-12-05 19:37:15.970294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2740) on tqpair=0x127e750 00:17:22.639 [2024-12-05 19:37:15.970302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:22.639 [2024-12-05 19:37:15.970315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e28c0) on tqpair=0x127e750 00:17:22.639 [2024-12-05 19:37:15.970320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:22.639 [2024-12-05 19:37:15.970326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2a40) on tqpair=0x127e750 00:17:22.639 [2024-12-05 19:37:15.970330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:22.639 [2024-12-05 19:37:15.970351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.639 [2024-12-05 19:37:15.970356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:22.639 [2024-12-05 19:37:15.970366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.639 [2024-12-05 19:37:15.970370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.639 [2024-12-05 19:37:15.970374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.639 [2024-12-05 19:37:15.970383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:22.639 [2024-12-05 19:37:15.970427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.639 [2024-12-05 19:37:15.970476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.639 [2024-12-05 19:37:15.970484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.639 [2024-12-05 19:37:15.970488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.639 [2024-12-05 19:37:15.970492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.639 [2024-12-05 19:37:15.970509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.639 [2024-12-05 19:37:15.970514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.639 [2024-12-05 19:37:15.970518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.639 [2024-12-05 19:37:15.970525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.639 [2024-12-05 19:37:15.970548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.639 [2024-12-05 19:37:15.970616] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.639 [2024-12-05 19:37:15.970623] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.639 [2024-12-05 19:37:15.970627] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.970632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.640 [2024-12-05 19:37:15.970637] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:17:22.640 [2024-12-05 19:37:15.970642] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:17:22.640 [2024-12-05 19:37:15.970654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.970664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.970668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.640 [2024-12-05 19:37:15.970675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.640 [2024-12-05 19:37:15.970693] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.640 [2024-12-05 19:37:15.970744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.640 [2024-12-05 19:37:15.970757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.640 [2024-12-05 19:37:15.970762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.970766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.640 [2024-12-05 19:37:15.970778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.970783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.970787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.640 [2024-12-05 19:37:15.970795] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.640 [2024-12-05 19:37:15.970813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.640 [2024-12-05 19:37:15.970861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.640 [2024-12-05 19:37:15.970869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.640 [2024-12-05 19:37:15.970873] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.970877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.640 [2024-12-05 19:37:15.970888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.970893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.970897] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.640 [2024-12-05 19:37:15.970904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.640 [2024-12-05 19:37:15.970922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.640 [2024-12-05 19:37:15.970964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.640 [2024-12-05 19:37:15.970971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.640 [2024-12-05 19:37:15.970975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.970979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.640 [2024-12-05 19:37:15.970990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.970995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.970999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.640 [2024-12-05 19:37:15.971006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.640 [2024-12-05 19:37:15.971023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.640 [2024-12-05 19:37:15.971067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.640 [2024-12-05 19:37:15.971074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.640 [2024-12-05 19:37:15.971078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.640 [2024-12-05 19:37:15.971093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.640 [2024-12-05 19:37:15.971109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.640 [2024-12-05 19:37:15.971144] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.640 [2024-12-05 19:37:15.971204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.640 [2024-12-05 19:37:15.971212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.640 [2024-12-05 19:37:15.971215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.640 [2024-12-05 19:37:15.971231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.640 [2024-12-05 19:37:15.971247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.640 [2024-12-05 19:37:15.971265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.640 [2024-12-05 19:37:15.971310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.640 [2024-12-05 19:37:15.971317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.640 [2024-12-05 19:37:15.971321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.640 [2024-12-05 19:37:15.971336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.640 [2024-12-05 19:37:15.971353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.640 [2024-12-05 19:37:15.971370] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.640 [2024-12-05 19:37:15.971415] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.640 [2024-12-05 19:37:15.971422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.640 [2024-12-05 19:37:15.971425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971430] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.640 [2024-12-05 19:37:15.971440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.640 [2024-12-05 19:37:15.971457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.640 [2024-12-05 19:37:15.971474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.640 [2024-12-05 19:37:15.971519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.640 [2024-12-05 
19:37:15.971526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.640 [2024-12-05 19:37:15.971530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.640 [2024-12-05 19:37:15.971545] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.640 [2024-12-05 19:37:15.971561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.640 [2024-12-05 19:37:15.971579] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.640 [2024-12-05 19:37:15.971628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.640 [2024-12-05 19:37:15.971635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.640 [2024-12-05 19:37:15.971639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.640 [2024-12-05 19:37:15.971654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971663] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.640 [2024-12-05 19:37:15.971670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.640 [2024-12-05 19:37:15.971687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.640 [2024-12-05 19:37:15.971749] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.640 [2024-12-05 19:37:15.971757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.640 [2024-12-05 19:37:15.971761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.640 [2024-12-05 19:37:15.971777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.640 [2024-12-05 19:37:15.971793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.640 [2024-12-05 19:37:15.971812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.640 [2024-12-05 19:37:15.971861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.640 [2024-12-05 19:37:15.971868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.640 [2024-12-05 19:37:15.971872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.640 
[2024-12-05 19:37:15.971876] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.640 [2024-12-05 19:37:15.971887] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971892] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.640 [2024-12-05 19:37:15.971896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.640 [2024-12-05 19:37:15.971904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.640 [2024-12-05 19:37:15.971921] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.641 [2024-12-05 19:37:15.971966] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.641 [2024-12-05 19:37:15.971974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.641 [2024-12-05 19:37:15.971978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.971983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.641 [2024-12-05 19:37:15.971993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.971998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972002] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.641 [2024-12-05 19:37:15.972010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.641 [2024-12-05 19:37:15.972027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.641 [2024-12-05 19:37:15.972081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.641 [2024-12-05 19:37:15.972088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.641 [2024-12-05 19:37:15.972092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.641 [2024-12-05 19:37:15.972107] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.641 [2024-12-05 19:37:15.972137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.641 [2024-12-05 19:37:15.972156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.641 [2024-12-05 19:37:15.972200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.641 [2024-12-05 19:37:15.972208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.641 [2024-12-05 19:37:15.972211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.641 [2024-12-05 19:37:15.972227] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.641 [2024-12-05 19:37:15.972243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.641 [2024-12-05 19:37:15.972260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.641 [2024-12-05 19:37:15.972305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.641 [2024-12-05 19:37:15.972313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.641 [2024-12-05 19:37:15.972316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.641 [2024-12-05 19:37:15.972331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.641 [2024-12-05 19:37:15.972348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.641 [2024-12-05 19:37:15.972365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.641 [2024-12-05 19:37:15.972410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.641 [2024-12-05 19:37:15.972417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.641 [2024-12-05 19:37:15.972421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.641 [2024-12-05 19:37:15.972436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.641 [2024-12-05 19:37:15.972452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.641 [2024-12-05 19:37:15.972469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.641 [2024-12-05 19:37:15.972517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.641 [2024-12-05 19:37:15.972524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.641 [2024-12-05 19:37:15.972528] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.641 [2024-12-05 19:37:15.972543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972551] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.641 [2024-12-05 19:37:15.972558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.641 [2024-12-05 19:37:15.972576] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.641 [2024-12-05 19:37:15.972618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.641 [2024-12-05 19:37:15.972634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.641 [2024-12-05 19:37:15.972639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.641 [2024-12-05 19:37:15.972655] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.641 [2024-12-05 19:37:15.972671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.641 [2024-12-05 19:37:15.972690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.641 [2024-12-05 19:37:15.972735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.641 [2024-12-05 19:37:15.972742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.641 [2024-12-05 19:37:15.972746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972750] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.641 [2024-12-05 19:37:15.972761] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.641 [2024-12-05 19:37:15.972777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.641 [2024-12-05 19:37:15.972794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.641 [2024-12-05 19:37:15.972847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.641 [2024-12-05 19:37:15.972854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.641 [2024-12-05 19:37:15.972858] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.641 [2024-12-05 19:37:15.972873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972878] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972882] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.641 [2024-12-05 19:37:15.972889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.641 [2024-12-05 19:37:15.972906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.641 [2024-12-05 19:37:15.972948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.641 [2024-12-05 19:37:15.972956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.641 [2024-12-05 19:37:15.972959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.641 [2024-12-05 19:37:15.972974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.972983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.641 [2024-12-05 19:37:15.972991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.641 [2024-12-05 19:37:15.973007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.641 [2024-12-05 19:37:15.973053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.641 [2024-12-05 19:37:15.973060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.641 [2024-12-05 19:37:15.973063] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.973067] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.641 [2024-12-05 19:37:15.973078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.973083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.973087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.641 [2024-12-05 19:37:15.973094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.641 [2024-12-05 19:37:15.973124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.641 [2024-12-05 19:37:15.973169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.641 [2024-12-05 19:37:15.973176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.641 [2024-12-05 19:37:15.973180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.973184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.641 [2024-12-05 19:37:15.973195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.973200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.641 [2024-12-05 19:37:15.973204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.642 [2024-12-05 19:37:15.973212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.642 [2024-12-05 19:37:15.973231] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.642 [2024-12-05 
19:37:15.973281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.642 [2024-12-05 19:37:15.973289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.642 [2024-12-05 19:37:15.973293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.642 [2024-12-05 19:37:15.973317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973326] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.642 [2024-12-05 19:37:15.973333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.642 [2024-12-05 19:37:15.973351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.642 [2024-12-05 19:37:15.973396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.642 [2024-12-05 19:37:15.973403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.642 [2024-12-05 19:37:15.973407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.642 [2024-12-05 19:37:15.973422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.642 [2024-12-05 19:37:15.973438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.642 [2024-12-05 19:37:15.973455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.642 [2024-12-05 19:37:15.973500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.642 [2024-12-05 19:37:15.973507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.642 [2024-12-05 19:37:15.973511] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.642 [2024-12-05 19:37:15.973540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.642 [2024-12-05 19:37:15.973556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.642 [2024-12-05 19:37:15.973573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.642 [2024-12-05 19:37:15.973625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.642 [2024-12-05 19:37:15.973632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.642 
[2024-12-05 19:37:15.973636] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.642 [2024-12-05 19:37:15.973650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.642 [2024-12-05 19:37:15.973666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.642 [2024-12-05 19:37:15.973692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.642 [2024-12-05 19:37:15.973739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.642 [2024-12-05 19:37:15.973746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.642 [2024-12-05 19:37:15.973750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.642 [2024-12-05 19:37:15.973764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.642 [2024-12-05 19:37:15.973780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.642 [2024-12-05 19:37:15.973797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.642 [2024-12-05 19:37:15.973841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.642 [2024-12-05 19:37:15.973848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.642 [2024-12-05 19:37:15.973851] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.642 [2024-12-05 19:37:15.973866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.642 [2024-12-05 19:37:15.973882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.642 [2024-12-05 19:37:15.973898] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.642 [2024-12-05 19:37:15.973939] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.642 [2024-12-05 19:37:15.973951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.642 [2024-12-05 19:37:15.973955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.642 [2024-12-05 19:37:15.973970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.973996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.642 [2024-12-05 19:37:15.974003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.642 [2024-12-05 19:37:15.974021] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.642 [2024-12-05 19:37:15.974067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.642 [2024-12-05 19:37:15.974075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.642 [2024-12-05 19:37:15.974078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.974083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.642 [2024-12-05 19:37:15.974093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.974098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.974102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.642 [2024-12-05 19:37:15.974109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.642 [2024-12-05 19:37:15.978151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.642 [2024-12-05 19:37:15.978178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.642 [2024-12-05 19:37:15.978186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.642 [2024-12-05 19:37:15.978191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.978195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.642 [2024-12-05 19:37:15.978210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.978216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.978220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x127e750) 00:17:22.642 [2024-12-05 19:37:15.978229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:22.642 [2024-12-05 19:37:15.978254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e2bc0, cid 3, qid 0 00:17:22.642 [2024-12-05 19:37:15.978307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:22.642 [2024-12-05 19:37:15.978314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:22.642 [2024-12-05 19:37:15.978318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:22.642 [2024-12-05 19:37:15.978323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e2bc0) on tqpair=0x127e750 00:17:22.642 [2024-12-05 19:37:15.978331] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 
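The trace ending above is the host-side controller teardown (nvme_ctrlr_destruct_async, "Prepare to destruct SSD"): nvme_ctrlr_shutdown_set_cc_done reports RTD3E = 0 us and a 10000 ms shutdown timeout, and each FABRIC PROPERTY GET admin command that follows corresponds to the host reading the controller's CSTS property over the TCP admin queue until nvme_ctrlr_shutdown_poll_async sees shutdown complete, here after 7 milliseconds. A minimal C sketch of that sequence is shown below; fabric_property_get, fabric_property_set, and now_ms are hypothetical placeholders standing in for the fabrics Property Get/Set capsules, not SPDK's actual internals.

    #include <stdbool.h>
    #include <stdint.h>

    /* Placeholder transport hooks: conceptually these are the fabrics
     * Property Get/Set capsules that show up as FABRIC PROPERTY GET/SET
     * entries in the trace above (hypothetical signatures, illustration only). */
    extern uint32_t fabric_property_get(uint32_t offset);
    extern void     fabric_property_set(uint32_t offset, uint32_t value);
    extern uint64_t now_ms(void);

    #define NVME_REG_CC        0x14u         /* Controller Configuration */
    #define NVME_REG_CSTS      0x1cu         /* Controller Status        */
    #define CC_SHN_MASK        (0x3u << 14)
    #define CC_SHN_NORMAL      (0x1u << 14)  /* request normal shutdown  */
    #define CSTS_SHST_MASK     (0x3u << 2)
    #define CSTS_SHST_COMPLETE (0x2u << 2)   /* shutdown processing done */

    /* Request a normal shutdown, then poll CSTS.SHST until the controller
     * reports completion or the shutdown timeout expires. */
    static bool shutdown_controller(uint32_t timeout_ms)
    {
        uint32_t cc = fabric_property_get(NVME_REG_CC);

        fabric_property_set(NVME_REG_CC, (cc & ~CC_SHN_MASK) | CC_SHN_NORMAL);

        uint64_t start = now_ms();
        while (now_ms() - start < timeout_ms) {
            uint32_t csts = fabric_property_get(NVME_REG_CSTS);
            if ((csts & CSTS_SHST_MASK) == CSTS_SHST_COMPLETE) {
                return true;   /* e.g. "shutdown complete in 7 milliseconds" */
            }
        }
        return false;          /* timed out waiting for CSTS.SHST */
    }

In this run the poll loop finished after 7 ms, well inside the 10000 ms budget logged as the shutdown timeout, after which the remaining identify output and the test cleanup continue below.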
00:17:22.642 0% 00:17:22.642 Data Units Read: 0 00:17:22.642 Data Units Written: 0 00:17:22.642 Host Read Commands: 0 00:17:22.642 Host Write Commands: 0 00:17:22.642 Controller Busy Time: 0 minutes 00:17:22.642 Power Cycles: 0 00:17:22.642 Power On Hours: 0 hours 00:17:22.642 Unsafe Shutdowns: 0 00:17:22.642 Unrecoverable Media Errors: 0 00:17:22.642 Lifetime Error Log Entries: 0 00:17:22.642 Warning Temperature Time: 0 minutes 00:17:22.642 Critical Temperature Time: 0 minutes 00:17:22.642 00:17:22.642 Number of Queues 00:17:22.642 ================ 00:17:22.642 Number of I/O Submission Queues: 127 00:17:22.642 Number of I/O Completion Queues: 127 00:17:22.642 00:17:22.642 Active Namespaces 00:17:22.642 ================= 00:17:22.642 Namespace ID:1 00:17:22.642 Error Recovery Timeout: Unlimited 00:17:22.642 Command Set Identifier: NVM (00h) 00:17:22.642 Deallocate: Supported 00:17:22.642 Deallocated/Unwritten Error: Not Supported 00:17:22.642 Deallocated Read Value: Unknown 00:17:22.642 Deallocate in Write Zeroes: Not Supported 00:17:22.642 Deallocated Guard Field: 0xFFFF 00:17:22.642 Flush: Supported 00:17:22.642 Reservation: Supported 00:17:22.642 Namespace Sharing Capabilities: Multiple Controllers 00:17:22.643 Size (in LBAs): 131072 (0GiB) 00:17:22.643 Capacity (in LBAs): 131072 (0GiB) 00:17:22.643 Utilization (in LBAs): 131072 (0GiB) 00:17:22.643 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:22.643 EUI64: ABCDEF0123456789 00:17:22.643 UUID: 5eaaf4d9-8c92-4227-96d3-71b969fc38ad 00:17:22.643 Thin Provisioning: Not Supported 00:17:22.643 Per-NS Atomic Units: Yes 00:17:22.643 Atomic Boundary Size (Normal): 0 00:17:22.643 Atomic Boundary Size (PFail): 0 00:17:22.643 Atomic Boundary Offset: 0 00:17:22.643 Maximum Single Source Range Length: 65535 00:17:22.643 Maximum Copy Length: 65535 00:17:22.643 Maximum Source Range Count: 1 00:17:22.643 NGUID/EUI64 Never Reused: No 00:17:22.643 Namespace Write Protected: No 00:17:22.643 Number of LBA Formats: 1 00:17:22.643 Current LBA Format: LBA Format #00 00:17:22.643 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:22.643 00:17:22.643 19:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:22.643 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:22.643 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.643 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:22.643 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.643 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:22.643 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:22.643 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:22.643 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:22.643 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:22.643 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:22.643 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:22.643 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:22.643 rmmod nvme_tcp 00:17:22.901 rmmod nvme_fabrics 00:17:22.902 rmmod nvme_keyring 00:17:22.902 19:37:16 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:22.902 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:22.902 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:22.902 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74111 ']' 00:17:22.902 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74111 00:17:22.902 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74111 ']' 00:17:22.902 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74111 00:17:22.902 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:17:22.902 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.902 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74111 00:17:22.902 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.902 killing process with pid 74111 00:17:22.902 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.902 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74111' 00:17:22.902 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74111 00:17:22.902 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74111 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip 
link delete nvmf_br type bridge 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.161 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:23.444 00:17:23.444 real 0m2.303s 00:17:23.444 user 0m4.753s 00:17:23.444 sys 0m0.768s 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:23.444 ************************************ 00:17:23.444 END TEST nvmf_identify 00:17:23.444 ************************************ 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.444 ************************************ 00:17:23.444 START TEST nvmf_perf 00:17:23.444 ************************************ 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:23.444 * Looking for test storage... 
00:17:23.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:23.444 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:23.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.445 --rc genhtml_branch_coverage=1 00:17:23.445 --rc genhtml_function_coverage=1 00:17:23.445 --rc genhtml_legend=1 00:17:23.445 --rc geninfo_all_blocks=1 00:17:23.445 --rc geninfo_unexecuted_blocks=1 00:17:23.445 00:17:23.445 ' 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:23.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.445 --rc genhtml_branch_coverage=1 00:17:23.445 --rc genhtml_function_coverage=1 00:17:23.445 --rc genhtml_legend=1 00:17:23.445 --rc geninfo_all_blocks=1 00:17:23.445 --rc geninfo_unexecuted_blocks=1 00:17:23.445 00:17:23.445 ' 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:23.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.445 --rc genhtml_branch_coverage=1 00:17:23.445 --rc genhtml_function_coverage=1 00:17:23.445 --rc genhtml_legend=1 00:17:23.445 --rc geninfo_all_blocks=1 00:17:23.445 --rc geninfo_unexecuted_blocks=1 00:17:23.445 00:17:23.445 ' 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:23.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.445 --rc genhtml_branch_coverage=1 00:17:23.445 --rc genhtml_function_coverage=1 00:17:23.445 --rc genhtml_legend=1 00:17:23.445 --rc geninfo_all_blocks=1 00:17:23.445 --rc geninfo_unexecuted_blocks=1 00:17:23.445 00:17:23.445 ' 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:23.445 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:23.445 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:23.708 Cannot find device "nvmf_init_br" 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:23.708 Cannot find device "nvmf_init_br2" 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:23.708 Cannot find device "nvmf_tgt_br" 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:23.708 Cannot find device "nvmf_tgt_br2" 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:23.708 Cannot find device "nvmf_init_br" 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:23.708 Cannot find device "nvmf_init_br2" 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:23.708 Cannot find device "nvmf_tgt_br" 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:23.708 Cannot find device "nvmf_tgt_br2" 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:23.708 Cannot find device "nvmf_br" 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:23.708 Cannot find device "nvmf_init_if" 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:23.708 Cannot find device "nvmf_init_if2" 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:23.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:23.708 19:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:23.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.708 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:23.708 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:23.708 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:23.708 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:23.708 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:23.708 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:23.708 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:23.708 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:23.708 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:23.708 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:23.708 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:23.708 19:37:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:23.708 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:23.709 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:23.709 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:23.709 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:23.709 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:23.709 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:23.709 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:23.709 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:23.968 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:23.968 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:17:23.968 00:17:23.968 --- 10.0.0.3 ping statistics --- 00:17:23.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.968 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:23.968 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:23.968 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:17:23.968 00:17:23.968 --- 10.0.0.4 ping statistics --- 00:17:23.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.968 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:23.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:23.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:23.968 00:17:23.968 --- 10.0.0.1 ping statistics --- 00:17:23.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.968 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:23.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:23.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:17:23.968 00:17:23.968 --- 10.0.0.2 ping statistics --- 00:17:23.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.968 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:23.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74363 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74363 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74363 ']' 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
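For readability, the veth topology that nvmf_veth_init assembles in the trace above can be condensed into the sketch below. The interface names, namespace name, and 10.0.0.x addresses are the ones this run uses; the loops and ordering are a simplification of test/nvmf/common.sh, not the script verbatim.

# target-side interfaces live in a private namespace, initiator side stays in the root namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# initiators get 10.0.0.1/.2, the namespaced target ends get 10.0.0.3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring everything up, bridge the peer ends together, allow NVMe/TCP port 4420 through
ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
for b in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$b" master nvmf_br; done
# the SPDK_NVMF comment tag is what nvmftestfini greps back out at teardown
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
ping -c 1 10.0.0.3   # connectivity check from the root namespace into the bridged topology

# the target application then runs inside the namespace:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF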
00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.968 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:23.968 [2024-12-05 19:37:17.331072] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:17:23.968 [2024-12-05 19:37:17.331213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.227 [2024-12-05 19:37:17.488257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.227 [2024-12-05 19:37:17.546576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.227 [2024-12-05 19:37:17.546651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.227 [2024-12-05 19:37:17.546669] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.227 [2024-12-05 19:37:17.546680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.227 [2024-12-05 19:37:17.546689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.227 [2024-12-05 19:37:17.548151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.227 [2024-12-05 19:37:17.548243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.227 [2024-12-05 19:37:17.548373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.227 [2024-12-05 19:37:17.548379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.227 [2024-12-05 19:37:17.608922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:24.486 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.486 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:17:24.486 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:24.486 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:24.486 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:24.486 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.487 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:24.487 19:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:25.053 19:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:25.053 19:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:25.312 19:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:25.312 19:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:25.568 19:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:25.568 19:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:17:25.568 19:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:25.568 19:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:25.568 19:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:25.826 [2024-12-05 19:37:19.061206] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.826 19:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:26.084 19:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:26.084 19:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:26.342 19:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:26.342 19:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:26.601 19:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:26.859 [2024-12-05 19:37:20.154434] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:26.859 19:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:27.118 19:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:27.118 19:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:27.118 19:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:27.118 19:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:28.494 Initializing NVMe Controllers 00:17:28.494 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:28.494 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:28.494 Initialization complete. Launching workers. 00:17:28.494 ======================================================== 00:17:28.494 Latency(us) 00:17:28.494 Device Information : IOPS MiB/s Average min max 00:17:28.494 PCIE (0000:00:10.0) NSID 1 from core 0: 22525.97 87.99 1420.65 329.53 9902.29 00:17:28.494 ======================================================== 00:17:28.494 Total : 22525.97 87.99 1420.65 329.53 9902.29 00:17:28.494 00:17:28.494 19:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:29.461 Initializing NVMe Controllers 00:17:29.461 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:29.461 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:29.461 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:29.461 Initialization complete. Launching workers. 
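The target-side configuration that perf.sh drives through rpc.py reduces to the short RPC sequence below. The subsystem NQN, serial number, bdev names, and the 10.0.0.3:4420 listener are taken from the trace above; treating Nvme0n1 as the bdev for the local controller at 0000:00:10.0 follows from the gen_nvme.sh / load_subsystem_config step, and the NSID comments mirror the perf output that follows.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_malloc_create 64 512       # 64 MiB malloc bdev, 512 B blocks -> Malloc0
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # NSID 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # NSID 2, backed by 0000:00:10.0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420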
00:17:29.461 ======================================================== 00:17:29.461 Latency(us) 00:17:29.461 Device Information : IOPS MiB/s Average min max 00:17:29.461 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3458.48 13.51 288.85 107.57 7224.58 00:17:29.461 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.87 0.48 8129.24 6008.80 12011.72 00:17:29.461 ======================================================== 00:17:29.461 Total : 3582.35 13.99 559.97 107.57 12011.72 00:17:29.461 00:17:29.461 19:37:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:30.853 Initializing NVMe Controllers 00:17:30.853 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:30.853 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:30.853 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:30.853 Initialization complete. Launching workers. 00:17:30.853 ======================================================== 00:17:30.853 Latency(us) 00:17:30.853 Device Information : IOPS MiB/s Average min max 00:17:30.853 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8435.85 32.95 3794.74 785.21 8347.27 00:17:30.853 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4008.12 15.66 8029.06 4909.15 16523.18 00:17:30.853 ======================================================== 00:17:30.853 Total : 12443.97 48.61 5158.59 785.21 16523.18 00:17:30.853 00:17:30.853 19:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:30.853 19:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:33.377 Initializing NVMe Controllers 00:17:33.377 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:33.377 Controller IO queue size 128, less than required. 00:17:33.377 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:33.377 Controller IO queue size 128, less than required. 00:17:33.377 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:33.377 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:33.377 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:33.377 Initialization complete. Launching workers. 
00:17:33.377 ======================================================== 00:17:33.377 Latency(us) 00:17:33.377 Device Information : IOPS MiB/s Average min max 00:17:33.377 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1804.55 451.14 72041.60 40163.00 110861.72 00:17:33.377 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 692.79 173.20 195321.73 48910.32 314033.01 00:17:33.377 ======================================================== 00:17:33.377 Total : 2497.34 624.33 106240.94 40163.00 314033.01 00:17:33.377 00:17:33.635 19:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:17:33.894 Initializing NVMe Controllers 00:17:33.894 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:33.894 Controller IO queue size 128, less than required. 00:17:33.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:33.894 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:33.894 Controller IO queue size 128, less than required. 00:17:33.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:33.894 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:17:33.894 WARNING: Some requested NVMe devices were skipped 00:17:33.894 No valid NVMe controllers or AIO or URING devices found 00:17:33.894 19:37:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:17:36.438 Initializing NVMe Controllers 00:17:36.438 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:36.438 Controller IO queue size 128, less than required. 00:17:36.438 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:36.438 Controller IO queue size 128, less than required. 00:17:36.438 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:36.438 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:36.438 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:36.438 Initialization complete. Launching workers. 
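Each fabrics data point above is an independent spdk_nvme_perf run against the 10.0.0.3:4420 listener; only queue depth, I/O size, runtime, and a few extra flags vary between runs. The common shape, taken from the invocations traced above:

# queue depth 32, 4 KiB I/O, 50/50 random read/write mix, 1 second, over NVMe/TCP
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 32 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

# the runs above vary -q (1/32/128), -o (4096/36964/262144) and -t, and add flags such as
# --transport-stat, which dumps the per-queue TCP poll/completion counters printed next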
00:17:36.438 00:17:36.438 ==================== 00:17:36.438 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:36.438 TCP transport: 00:17:36.438 polls: 10427 00:17:36.438 idle_polls: 6952 00:17:36.439 sock_completions: 3475 00:17:36.439 nvme_completions: 6297 00:17:36.439 submitted_requests: 9428 00:17:36.439 queued_requests: 1 00:17:36.439 00:17:36.439 ==================== 00:17:36.439 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:36.439 TCP transport: 00:17:36.439 polls: 10814 00:17:36.439 idle_polls: 6942 00:17:36.439 sock_completions: 3872 00:17:36.439 nvme_completions: 6829 00:17:36.439 submitted_requests: 10292 00:17:36.439 queued_requests: 1 00:17:36.439 ======================================================== 00:17:36.439 Latency(us) 00:17:36.439 Device Information : IOPS MiB/s Average min max 00:17:36.439 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1573.76 393.44 83448.45 38966.95 136248.91 00:17:36.439 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1706.74 426.69 75721.48 36508.36 129770.30 00:17:36.439 ======================================================== 00:17:36.439 Total : 3280.50 820.13 79428.35 36508.36 136248.91 00:17:36.439 00:17:36.439 19:37:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:36.439 19:37:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.708 19:37:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:36.708 19:37:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:36.708 19:37:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:36.708 19:37:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:36.708 19:37:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:17:36.708 19:37:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:36.708 19:37:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:17:36.708 19:37:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:36.708 19:37:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:36.708 rmmod nvme_tcp 00:17:36.708 rmmod nvme_fabrics 00:17:36.708 rmmod nvme_keyring 00:17:36.708 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:36.708 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:17:36.708 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:17:36.708 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74363 ']' 00:17:36.708 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74363 00:17:36.708 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74363 ']' 00:17:36.708 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74363 00:17:36.708 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:17:36.708 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.708 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74363 00:17:36.708 killing process with pid 74363 00:17:36.708 19:37:30 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:36.708 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.708 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74363' 00:17:36.708 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74363 00:17:36.708 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74363 00:17:37.642 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:37.642 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:37.642 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:37.642 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:17:37.642 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:17:37.642 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:37.642 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.643 19:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.643 19:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:17:37.643 00:17:37.643 real 0m14.376s 00:17:37.643 user 0m51.449s 00:17:37.643 sys 0m4.163s 00:17:37.643 
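Teardown (nvmftestfini) mirrors the setup: unload the kernel initiator modules, stop the target, remove only the firewall rules the test added, and delete the veth/bridge/namespace topology. A condensed sketch of what the trace above does; the final namespace removal happens inside _remove_spdk_ns, whose output is suppressed, so that last line is an assumption about its effect.

modprobe -v -r nvme-tcp      # the log above shows nvme_tcp, nvme_fabrics and nvme_keyring being removed
modprobe -v -r nvme-fabrics
kill "$nvmfpid"              # nvmf_tgt, pid 74363 in this run

# strip only the rules tagged with the SPDK_NVMF comment at setup time
iptables-save | grep -v SPDK_NVMF | iptables-restore

for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" nomaster && ip link set "$l" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed effect of _remove_spdk_ns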
************************************ 00:17:37.643 END TEST nvmf_perf 00:17:37.643 ************************************ 00:17:37.643 19:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.643 19:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:37.643 19:37:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:37.643 19:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:37.643 19:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.643 19:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.902 ************************************ 00:17:37.902 START TEST nvmf_fio_host 00:17:37.902 ************************************ 00:17:37.902 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:37.902 * Looking for test storage... 00:17:37.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:37.902 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:37.902 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:17:37.902 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:37.902 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:37.902 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.902 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.902 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.902 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.902 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.902 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.902 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.902 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.902 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.902 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.902 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:37.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.903 --rc genhtml_branch_coverage=1 00:17:37.903 --rc genhtml_function_coverage=1 00:17:37.903 --rc genhtml_legend=1 00:17:37.903 --rc geninfo_all_blocks=1 00:17:37.903 --rc geninfo_unexecuted_blocks=1 00:17:37.903 00:17:37.903 ' 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:37.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.903 --rc genhtml_branch_coverage=1 00:17:37.903 --rc genhtml_function_coverage=1 00:17:37.903 --rc genhtml_legend=1 00:17:37.903 --rc geninfo_all_blocks=1 00:17:37.903 --rc geninfo_unexecuted_blocks=1 00:17:37.903 00:17:37.903 ' 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:37.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.903 --rc genhtml_branch_coverage=1 00:17:37.903 --rc genhtml_function_coverage=1 00:17:37.903 --rc genhtml_legend=1 00:17:37.903 --rc geninfo_all_blocks=1 00:17:37.903 --rc geninfo_unexecuted_blocks=1 00:17:37.903 00:17:37.903 ' 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:37.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.903 --rc genhtml_branch_coverage=1 00:17:37.903 --rc genhtml_function_coverage=1 00:17:37.903 --rc genhtml_legend=1 00:17:37.903 --rc geninfo_all_blocks=1 00:17:37.903 --rc geninfo_unexecuted_blocks=1 00:17:37.903 00:17:37.903 ' 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.903 19:37:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.903 19:37:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.903 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.904 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
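The nvmf_veth_init trace that follows builds a small bridged veth topology: two initiator-side and two target-side veth pairs joined by a bridge, with the target ends moved into a dedicated network namespace. A minimal sketch of that topology, assuming root and iproute2, using only the interface names and addresses shown in this trace (the sketch condenses the real common.sh logic and omits the second pair of interfaces):

# initiator end stays in the root namespace, target end goes into nvmf_tgt_ns_spdk
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# bridge the *_br peers together so initiator and target can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ping -c 1 10.0.0.3   # reachability check, as in the trace below

The "Cannot find device"/"Cannot open network namespace" messages in the trace are expected: teardown of any previous topology is attempted first and tolerated if nothing exists yet.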
00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:37.904 Cannot find device "nvmf_init_br" 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:37.904 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:38.163 Cannot find device "nvmf_init_br2" 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:38.163 Cannot find device "nvmf_tgt_br" 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:38.163 Cannot find device "nvmf_tgt_br2" 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:38.163 Cannot find device "nvmf_init_br" 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:38.163 Cannot find device "nvmf_init_br2" 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:38.163 Cannot find device "nvmf_tgt_br" 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:38.163 Cannot find device "nvmf_tgt_br2" 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:38.163 Cannot find device "nvmf_br" 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:38.163 Cannot find device "nvmf_init_if" 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:38.163 Cannot find device "nvmf_init_if2" 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:38.163 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:38.163 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:38.163 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:38.424 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:38.424 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:17:38.424 00:17:38.424 --- 10.0.0.3 ping statistics --- 00:17:38.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.424 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:38.424 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:38.424 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:17:38.424 00:17:38.424 --- 10.0.0.4 ping statistics --- 00:17:38.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.424 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:38.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:38.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:38.424 00:17:38.424 --- 10.0.0.1 ping statistics --- 00:17:38.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.424 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:38.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:17:38.424 00:17:38.424 --- 10.0.0.2 ping statistics --- 00:17:38.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.424 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74824 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74824 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 74824 ']' 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.424 19:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.424 [2024-12-05 19:37:31.791661] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:17:38.424 [2024-12-05 19:37:31.791774] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.684 [2024-12-05 19:37:31.947951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:38.684 [2024-12-05 19:37:32.007917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.684 [2024-12-05 19:37:32.007977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.684 [2024-12-05 19:37:32.007997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.684 [2024-12-05 19:37:32.008007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.684 [2024-12-05 19:37:32.008017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
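For orientation, the target-side setup that the fio_host trace below drives can be condensed into the following sketch. Every command is taken from the rpc.py and fio invocations logged in this section; only the absolute paths are shortened to the repo root, so this is a condensed illustration rather than the exact host/fio.sh implementation:

# start the NVMe-oF target inside the test namespace (pid 74824 in this run)
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# create the TCP transport, a RAM-backed bdev, and expose it over a subsystem
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
# fio then exercises the listener through the SPDK nvme fio plugin
LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096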
00:17:38.684 [2024-12-05 19:37:32.009405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.684 [2024-12-05 19:37:32.009536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.684 [2024-12-05 19:37:32.009684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:38.684 [2024-12-05 19:37:32.009691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.684 [2024-12-05 19:37:32.064417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:38.944 19:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.944 19:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:17:38.944 19:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:39.203 [2024-12-05 19:37:32.426863] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.203 19:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:39.203 19:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:39.203 19:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.203 19:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:39.463 Malloc1 00:17:39.463 19:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:39.722 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:40.289 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:40.289 [2024-12-05 19:37:33.661599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:40.289 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:40.548 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:40.549 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:40.549 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:40.549 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:40.549 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:40.549 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:40.549 19:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:40.807 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:40.807 fio-3.35 00:17:40.807 Starting 1 thread 00:17:43.342 00:17:43.342 test: (groupid=0, jobs=1): err= 0: pid=74899: Thu Dec 5 19:37:36 2024 00:17:43.342 read: IOPS=8764, BW=34.2MiB/s (35.9MB/s)(68.7MiB/2007msec) 00:17:43.342 slat (nsec): min=1927, max=323463, avg=2563.02, stdev=3445.30 00:17:43.342 clat (usec): min=2608, max=13447, avg=7602.39, stdev=550.37 00:17:43.342 lat (usec): min=2665, max=13449, avg=7604.96, stdev=550.09 00:17:43.342 clat percentiles (usec): 00:17:43.342 | 1.00th=[ 6456], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7177], 00:17:43.342 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7570], 60.00th=[ 7701], 00:17:43.342 | 70.00th=[ 7832], 80.00th=[ 7963], 90.00th=[ 8225], 95.00th=[ 8455], 00:17:43.342 | 99.00th=[ 8848], 99.50th=[ 9634], 99.90th=[11469], 99.95th=[12911], 00:17:43.342 | 99.99th=[13435] 00:17:43.342 bw ( KiB/s): min=34320, max=36112, per=100.00%, avg=35076.00, stdev=762.89, samples=4 00:17:43.342 iops : min= 8580, max= 9028, avg=8769.00, stdev=190.72, samples=4 00:17:43.342 write: IOPS=8774, BW=34.3MiB/s (35.9MB/s)(68.8MiB/2007msec); 0 zone resets 00:17:43.342 slat (usec): min=2, max=239, avg= 2.63, stdev= 2.42 00:17:43.342 clat (usec): min=2470, max=13282, avg=6930.24, stdev=523.71 00:17:43.342 lat (usec): min=2484, max=13284, avg=6932.87, stdev=523.53 00:17:43.342 clat 
percentiles (usec): 00:17:43.342 | 1.00th=[ 5866], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6587], 00:17:43.342 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 7046], 00:17:43.342 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7439], 95.00th=[ 7635], 00:17:43.342 | 99.00th=[ 8094], 99.50th=[ 8979], 99.90th=[11469], 99.95th=[12911], 00:17:43.342 | 99.99th=[13304] 00:17:43.342 bw ( KiB/s): min=34888, max=35272, per=99.93%, avg=35074.00, stdev=168.74, samples=4 00:17:43.342 iops : min= 8722, max= 8818, avg=8768.50, stdev=42.19, samples=4 00:17:43.342 lat (msec) : 4=0.08%, 10=99.61%, 20=0.31% 00:17:43.342 cpu : usr=71.73%, sys=21.73%, ctx=7, majf=0, minf=7 00:17:43.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:43.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:43.342 issued rwts: total=17591,17611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:43.342 00:17:43.342 Run status group 0 (all jobs): 00:17:43.342 READ: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=68.7MiB (72.1MB), run=2007-2007msec 00:17:43.342 WRITE: bw=34.3MiB/s (35.9MB/s), 34.3MiB/s-34.3MiB/s (35.9MB/s-35.9MB/s), io=68.8MiB (72.1MB), run=2007-2007msec 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 
-- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:43.342 19:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:43.342 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:43.342 fio-3.35 00:17:43.342 Starting 1 thread 00:17:45.893 00:17:45.893 test: (groupid=0, jobs=1): err= 0: pid=74950: Thu Dec 5 19:37:38 2024 00:17:45.893 read: IOPS=8000, BW=125MiB/s (131MB/s)(251MiB/2009msec) 00:17:45.893 slat (usec): min=2, max=118, avg= 3.83, stdev= 2.31 00:17:45.893 clat (usec): min=2391, max=18241, avg=8833.95, stdev=2475.42 00:17:45.893 lat (usec): min=2394, max=18244, avg=8837.78, stdev=2475.47 00:17:45.893 clat percentiles (usec): 00:17:45.893 | 1.00th=[ 4293], 5.00th=[ 5145], 10.00th=[ 5669], 20.00th=[ 6587], 00:17:45.893 | 30.00th=[ 7373], 40.00th=[ 7963], 50.00th=[ 8717], 60.00th=[ 9372], 00:17:45.893 | 70.00th=[10159], 80.00th=[10683], 90.00th=[12125], 95.00th=[13435], 00:17:45.893 | 99.00th=[15270], 99.50th=[15795], 99.90th=[17171], 99.95th=[17433], 00:17:45.893 | 99.99th=[17433] 00:17:45.893 bw ( KiB/s): min=59360, max=70816, per=51.83%, avg=66352.00, stdev=5412.04, samples=4 00:17:45.893 iops : min= 3710, max= 4426, avg=4147.00, stdev=338.25, samples=4 00:17:45.893 write: IOPS=4832, BW=75.5MiB/s (79.2MB/s)(136MiB/1800msec); 0 zone resets 00:17:45.893 slat (usec): min=31, max=325, avg=39.70, stdev= 9.03 00:17:45.893 clat (usec): min=4778, max=21416, avg=12367.55, stdev=2362.62 00:17:45.893 lat (usec): min=4811, max=21453, avg=12407.25, stdev=2363.91 00:17:45.893 clat percentiles (usec): 00:17:45.893 | 1.00th=[ 7504], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10290], 00:17:45.893 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12125], 60.00th=[12911], 00:17:45.893 | 70.00th=[13566], 80.00th=[14484], 90.00th=[15533], 95.00th=[16450], 00:17:45.893 | 99.00th=[18220], 99.50th=[19006], 99.90th=[19792], 99.95th=[20055], 00:17:45.893 | 99.99th=[21365] 00:17:45.893 bw ( KiB/s): min=62112, max=73728, per=89.39%, avg=69112.00, stdev=5586.57, samples=4 00:17:45.893 iops : min= 3882, max= 4608, avg=4319.50, stdev=349.16, samples=4 00:17:45.893 lat (msec) : 4=0.29%, 10=48.95%, 20=50.74%, 50=0.02% 00:17:45.893 cpu : usr=83.07%, sys=12.85%, ctx=15, majf=0, minf=14 00:17:45.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:17:45.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:45.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:45.893 issued rwts: total=16073,8698,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:45.893 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:45.893 00:17:45.893 Run status group 0 (all jobs): 00:17:45.893 READ: bw=125MiB/s (131MB/s), 
125MiB/s-125MiB/s (131MB/s-131MB/s), io=251MiB (263MB), run=2009-2009msec 00:17:45.893 WRITE: bw=75.5MiB/s (79.2MB/s), 75.5MiB/s-75.5MiB/s (79.2MB/s-79.2MB/s), io=136MiB (143MB), run=1800-1800msec 00:17:45.893 19:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.893 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:17:45.893 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:45.893 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:17:45.893 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:17:45.893 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:45.893 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:46.153 rmmod nvme_tcp 00:17:46.153 rmmod nvme_fabrics 00:17:46.153 rmmod nvme_keyring 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74824 ']' 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74824 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74824 ']' 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74824 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74824 00:17:46.153 killing process with pid 74824 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74824' 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74824 00:17:46.153 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74824 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@791 -- # iptables-save 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:46.412 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:46.671 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:46.671 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:46.671 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:46.671 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.671 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.671 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.671 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:17:46.671 ************************************ 00:17:46.671 END TEST nvmf_fio_host 00:17:46.671 ************************************ 00:17:46.671 00:17:46.671 real 0m8.871s 00:17:46.671 user 0m35.110s 00:17:46.671 sys 0m2.458s 00:17:46.671 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.671 19:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.671 19:37:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:46.671 19:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:46.671 19:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.671 19:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.671 ************************************ 00:17:46.671 START TEST nvmf_failover 00:17:46.671 
************************************ 00:17:46.671 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:46.671 * Looking for test storage... 00:17:46.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:46.671 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:46.671 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:17:46.671 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:46.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.931 --rc genhtml_branch_coverage=1 00:17:46.931 --rc genhtml_function_coverage=1 00:17:46.931 --rc genhtml_legend=1 00:17:46.931 --rc geninfo_all_blocks=1 00:17:46.931 --rc geninfo_unexecuted_blocks=1 00:17:46.931 00:17:46.931 ' 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:46.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.931 --rc genhtml_branch_coverage=1 00:17:46.931 --rc genhtml_function_coverage=1 00:17:46.931 --rc genhtml_legend=1 00:17:46.931 --rc geninfo_all_blocks=1 00:17:46.931 --rc geninfo_unexecuted_blocks=1 00:17:46.931 00:17:46.931 ' 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:46.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.931 --rc genhtml_branch_coverage=1 00:17:46.931 --rc genhtml_function_coverage=1 00:17:46.931 --rc genhtml_legend=1 00:17:46.931 --rc geninfo_all_blocks=1 00:17:46.931 --rc geninfo_unexecuted_blocks=1 00:17:46.931 00:17:46.931 ' 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:46.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.931 --rc genhtml_branch_coverage=1 00:17:46.931 --rc genhtml_function_coverage=1 00:17:46.931 --rc genhtml_legend=1 00:17:46.931 --rc geninfo_all_blocks=1 00:17:46.931 --rc geninfo_unexecuted_blocks=1 00:17:46.931 00:17:46.931 ' 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.931 
19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:46.931 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:46.932 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
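A note on the ipts/iptr helpers whose expansions appear at nvmf/common.sh@790/@791 in this log: each iptables rule the tests insert is tagged with an SPDK_NVMF comment so that teardown can remove only the test rules. A minimal sketch of that pattern, using the exact rule shown in the trace:

# insertion: tag the rule with its own text under the SPDK_NVMF marker
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
# cleanup (the iptr call seen earlier): save the ruleset, drop tagged rules, restore
iptables-save | grep -v SPDK_NVMF | iptables-restore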
00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:46.932 Cannot find device "nvmf_init_br" 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:46.932 Cannot find device "nvmf_init_br2" 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:17:46.932 Cannot find device "nvmf_tgt_br" 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.932 Cannot find device "nvmf_tgt_br2" 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:46.932 Cannot find device "nvmf_init_br" 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:46.932 Cannot find device "nvmf_init_br2" 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:46.932 Cannot find device "nvmf_tgt_br" 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:46.932 Cannot find device "nvmf_tgt_br2" 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:46.932 Cannot find device "nvmf_br" 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:46.932 Cannot find device "nvmf_init_if" 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:46.932 Cannot find device "nvmf_init_if2" 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:46.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:46.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:46.932 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:47.194 
19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:47.194 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:47.194 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:17:47.194 00:17:47.194 --- 10.0.0.3 ping statistics --- 00:17:47.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.194 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:47.194 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:47.194 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:17:47.194 00:17:47.194 --- 10.0.0.4 ping statistics --- 00:17:47.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.194 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:47.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:47.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:17:47.194 00:17:47.194 --- 10.0.0.1 ping statistics --- 00:17:47.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.194 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:47.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:17:47.194 00:17:47.194 --- 10.0.0.2 ping statistics --- 00:17:47.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.194 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75210 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:47.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75210 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75210 ']' 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.194 19:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:47.453 [2024-12-05 19:37:40.649841] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:17:47.453 [2024-12-05 19:37:40.650330] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.453 [2024-12-05 19:37:40.804306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:47.453 [2024-12-05 19:37:40.872930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.453 [2024-12-05 19:37:40.873267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.453 [2024-12-05 19:37:40.873540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.453 [2024-12-05 19:37:40.873714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.453 [2024-12-05 19:37:40.873729] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:47.453 [2024-12-05 19:37:40.875209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.453 [2024-12-05 19:37:40.875285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:47.453 [2024-12-05 19:37:40.875288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.712 [2024-12-05 19:37:40.933800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:47.712 19:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.712 19:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:47.712 19:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:47.712 19:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:47.712 19:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:47.712 19:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.712 19:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:47.971 [2024-12-05 19:37:41.324627] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.971 19:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:48.228 Malloc0 00:17:48.228 19:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:48.486 19:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:48.744 19:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:49.001 [2024-12-05 19:37:42.402283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:49.001 19:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:49.258 [2024-12-05 19:37:42.658463] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:49.258 19:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:17:49.516 [2024-12-05 19:37:42.914747] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:17:49.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:49.516 19:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75266 00:17:49.516 19:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:49.516 19:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:49.516 19:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75266 /var/tmp/bdevperf.sock 00:17:49.516 19:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75266 ']' 00:17:49.516 19:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.516 19:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.516 19:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:49.516 19:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.516 19:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:50.889 19:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.889 19:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:50.889 19:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:50.889 NVMe0n1 00:17:50.889 19:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:51.456 00:17:51.456 19:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75288 00:17:51.456 19:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:51.456 19:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:17:52.391 19:37:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:52.649 19:37:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:17:55.938 19:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:55.938 00:17:56.197 19:37:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:56.468 19:37:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:17:59.755 19:37:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:59.755 [2024-12-05 19:37:52.943656] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:59.755 19:37:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:00.691 19:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:00.949 19:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75288 00:18:07.517 { 00:18:07.517 "results": [ 00:18:07.517 { 00:18:07.517 "job": "NVMe0n1", 00:18:07.517 "core_mask": "0x1", 00:18:07.517 "workload": "verify", 00:18:07.517 "status": "finished", 00:18:07.517 "verify_range": { 00:18:07.517 "start": 0, 00:18:07.517 "length": 16384 00:18:07.517 }, 00:18:07.517 "queue_depth": 128, 00:18:07.517 "io_size": 4096, 00:18:07.517 "runtime": 15.008995, 00:18:07.517 "iops": 9112.13575592503, 00:18:07.517 "mibps": 35.59428029658215, 00:18:07.517 "io_failed": 3429, 00:18:07.517 "io_timeout": 0, 00:18:07.517 "avg_latency_us": 13671.640076647582, 00:18:07.517 "min_latency_us": 659.0836363636364, 00:18:07.517 "max_latency_us": 15371.17090909091 00:18:07.517 } 00:18:07.517 ], 00:18:07.517 "core_count": 1 00:18:07.517 } 00:18:07.517 19:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75266 00:18:07.517 19:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75266 ']' 00:18:07.517 19:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75266 00:18:07.517 19:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:07.517 19:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.517 19:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75266 00:18:07.517 killing process with pid 75266 00:18:07.517 19:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:07.518 19:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:07.518 19:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75266' 00:18:07.518 19:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75266 00:18:07.518 19:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75266 00:18:07.518 19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:07.518 [2024-12-05 19:37:42.984821] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:18:07.518 [2024-12-05 19:37:42.984923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75266 ] 00:18:07.518 [2024-12-05 19:37:43.134382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.518 [2024-12-05 19:37:43.196296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.518 [2024-12-05 19:37:43.254538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:07.518 Running I/O for 15 seconds... 00:18:07.518 6948.00 IOPS, 27.14 MiB/s [2024-12-05T19:38:00.955Z] [2024-12-05 19:37:45.919030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.518 [2024-12-05 19:37:45.919097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:07.518 [2024-12-05 19:37:45.919357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919714] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.919977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.919992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.920007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.920022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.920036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.920051] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.920065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.920091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.920110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.920137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.920156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.920172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.920186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.920202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.920216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.920233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.518 [2024-12-05 19:37:45.920248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.518 [2024-12-05 19:37:45.920263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64072 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:07.519 [2024-12-05 19:37:45.920717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.920985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.920999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.921029] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.921058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.921087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.921128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.921172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.921202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.921231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.921261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.921291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.921320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.921350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.921379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.921409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.921438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.921468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.519 [2024-12-05 19:37:45.921503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.519 [2024-12-05 19:37:45.921520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.921534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.921549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.921564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.921579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.921593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.921609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.921628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.921645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.921659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.921674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.921688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.921704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.921718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.921733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.921748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.921763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.921777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.921792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.921806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.921822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.921836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.921851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.921865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.921881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.921901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.921917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.921931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.921946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.921960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 
19:37:45.921976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.921990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:57 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.520 [2024-12-05 19:37:45.922654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.520 [2024-12-05 19:37:45.922690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.520 [2024-12-05 19:37:45.922720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.520 [2024-12-05 19:37:45.922750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.520 [2024-12-05 19:37:45.922765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:45.922780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.922796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:45.922809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.922825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:45.922839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.922854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:45.922868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.922884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:45.922897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.922913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63720 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:45.922927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.922942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:45.922956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.922971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:45.922985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.923001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:45.923015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.923031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:45.923046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.923066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:45.923081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.923096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:45.923138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.923158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.521 [2024-12-05 19:37:45.923172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.923187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x628e00 is same with the state(6) to be set 00:18:07.521 [2024-12-05 19:37:45.923203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.521 [2024-12-05 19:37:45.923214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.521 [2024-12-05 19:37:45.923225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64664 len:8 PRP1 0x0 PRP2 0x0 00:18:07.521 [2024-12-05 19:37:45.923240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.923303] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start 
failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:18:07.521 [2024-12-05 19:37:45.923361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.521 [2024-12-05 19:37:45.923386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.923402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.521 [2024-12-05 19:37:45.923415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.923430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.521 [2024-12-05 19:37:45.923443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.923457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.521 [2024-12-05 19:37:45.923471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:45.923485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:07.521 [2024-12-05 19:37:45.927386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:07.521 [2024-12-05 19:37:45.927441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b9c60 (9): Bad file descriptor 00:18:07.521 [2024-12-05 19:37:45.956107] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
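00:18:07.521 The records above cover the first failover pass of this multipath run: every queued READ/WRITE on qid:1 is completed with ABORTED - SQ DELETION (00/08) while the submission queue is torn down, bdev_nvme then moves the trid from 10.0.0.3:4420 to 10.0.0.3:4421, the controller is disconnected, and the reset completes successfully. As a rough aid for skimming logs like this, a minimal Python sketch (a hypothetical helper, not part of the test suite; it assumes the console output has been saved to build.log) could tally the aborted completions and the failover transitions:
00:18:07.521     # Sketch: summarize SQ-deletion aborts and failover events from a saved
00:18:07.521     # SPDK/nvmf console log. Assumes the log text was captured to build.log.
00:18:07.521     import re
00:18:07.521     from collections import Counter
00:18:07.521
00:18:07.521     with open("build.log", encoding="utf-8", errors="replace") as f:
00:18:07.521         text = f.read()
00:18:07.521
00:18:07.521     # Completions aborted by SQ deletion, grouped by queue id.
00:18:07.521     aborted = Counter(re.findall(r"ABORTED - SQ DELETION \(00/08\) qid:(\d+)", text))
00:18:07.521
00:18:07.521     # Failover transitions reported by bdev_nvme_failover_trid.
00:18:07.521     failovers = re.findall(r"Start failover from (\S+) to (\S+)", text)
00:18:07.521
00:18:07.521     for qid, count in sorted(aborted.items()):
00:18:07.521         print(f"qid {qid}: {count} completions aborted by SQ deletion")
00:18:07.521     for src, dst in failovers:
00:18:07.521         print(f"failover: {src} -> {dst}")
00:18:07.521 Run against this console output, the sketch would report the qid:1 abort count plus the 10.0.0.3:4420 -> 10.0.0.3:4421 transition seen above and the later transitions in the same run.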
00:18:07.521 7825.50 IOPS, 30.57 MiB/s [2024-12-05T19:38:00.958Z] 8331.67 IOPS, 32.55 MiB/s [2024-12-05T19:38:00.958Z] 8597.25 IOPS, 33.58 MiB/s [2024-12-05T19:38:00.958Z] [2024-12-05 19:37:49.642659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.521 [2024-12-05 19:37:49.642736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.642812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.521 [2024-12-05 19:37:49.642831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.642848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.521 [2024-12-05 19:37:49.642863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.642878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.521 [2024-12-05 19:37:49.642893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.642908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.521 [2024-12-05 19:37:49.642922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.642938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.521 [2024-12-05 19:37:49.642952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.642968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.521 [2024-12-05 19:37:49.642982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.642997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.521 [2024-12-05 19:37:49.643011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.643027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.521 [2024-12-05 19:37:49.643041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.643056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.521 [2024-12-05 19:37:49.643070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.643086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.521 [2024-12-05 19:37:49.643100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.643116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:49.643145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.643165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:49.643179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.643195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:49.643209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.643234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:49.643249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.643264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:49.643278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.643294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:49.643307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.521 [2024-12-05 19:37:49.643325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.521 [2024-12-05 19:37:49.643340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.643369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.643399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.643429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.643458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.643487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.643516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.643546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.643575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.643612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.643642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.643672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.643701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.643732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.643761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.643821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.643852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.643881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.643911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.643949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.643978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.643994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.644007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.644031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.644047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 
[2024-12-05 19:37:49.644062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.644076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.644092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.644106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.644136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.644151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.644167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.644181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.644196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.644210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.644226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.644247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.644264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.644278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.644294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.522 [2024-12-05 19:37:49.644308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.644324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.644338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.644353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.644368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.644383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.644397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.644412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.522 [2024-12-05 19:37:49.644433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.522 [2024-12-05 19:37:49.644450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.523 [2024-12-05 19:37:49.644464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.523 [2024-12-05 19:37:49.644494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.523 [2024-12-05 19:37:49.644524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.523 [2024-12-05 19:37:49.644554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.644583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.644614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.644643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.644672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:106 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.644702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.644732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.644763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.644792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.644828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.644859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.644888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.644918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.644947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.644977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.644992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86536 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.645006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.645036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.645072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.645102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.645145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.645174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.645204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.645244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.645301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.523 [2024-12-05 19:37:49.645331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:07.523 [2024-12-05 19:37:49.645360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.523 [2024-12-05 19:37:49.645390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.523 [2024-12-05 19:37:49.645420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.523 [2024-12-05 19:37:49.645449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.523 [2024-12-05 19:37:49.645479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.523 [2024-12-05 19:37:49.645508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.523 [2024-12-05 19:37:49.645538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.523 [2024-12-05 19:37:49.645568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.523 [2024-12-05 19:37:49.645598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.523 [2024-12-05 19:37:49.645636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.523 [2024-12-05 19:37:49.645666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.523 [2024-12-05 19:37:49.645696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.523 [2024-12-05 19:37:49.645711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.524 [2024-12-05 19:37:49.645725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.645741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.524 [2024-12-05 19:37:49.645755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.645771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.524 [2024-12-05 19:37:49.645785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.645801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.524 [2024-12-05 19:37:49.645815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.645830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.524 [2024-12-05 19:37:49.645845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.645860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.524 [2024-12-05 19:37:49.645874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.645890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.524 [2024-12-05 19:37:49.645904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.645919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.524 [2024-12-05 19:37:49.645933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.645948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.524 [2024-12-05 19:37:49.645962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.645978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.524 [2024-12-05 19:37:49.645992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.524 [2024-12-05 19:37:49.646028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.524 [2024-12-05 19:37:49.646059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.524 [2024-12-05 19:37:49.646088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.524 [2024-12-05 19:37:49.646131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.524 [2024-12-05 19:37:49.646163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.524 [2024-12-05 19:37:49.646193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.524 [2024-12-05 19:37:49.646227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.524 [2024-12-05 19:37:49.646268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.524 [2024-12-05 19:37:49.646299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62d370 is same with the state(6) to be set 00:18:07.524 [2024-12-05 19:37:49.646332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.524 [2024-12-05 19:37:49.646343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.524 [2024-12-05 19:37:49.646354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86736 len:8 PRP1 0x0 PRP2 0x0 00:18:07.524 [2024-12-05 19:37:49.646367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.524 [2024-12-05 19:37:49.646392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.524 [2024-12-05 19:37:49.646402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87128 len:8 PRP1 0x0 PRP2 0x0 00:18:07.524 [2024-12-05 19:37:49.646416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.524 [2024-12-05 19:37:49.646449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.524 [2024-12-05 19:37:49.646459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87136 len:8 PRP1 0x0 PRP2 0x0 00:18:07.524 [2024-12-05 19:37:49.646473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.524 [2024-12-05 19:37:49.646496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.524 [2024-12-05 19:37:49.646507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87144 len:8 PRP1 0x0 PRP2 0x0 00:18:07.524 [2024-12-05 19:37:49.646520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.524 [2024-12-05 19:37:49.646544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.524 [2024-12-05 19:37:49.646554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87152 len:8 PRP1 0x0 PRP2 0x0 00:18:07.524 [2024-12-05 19:37:49.646568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.524 [2024-12-05 19:37:49.646591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.524 [2024-12-05 19:37:49.646601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87160 len:8 PRP1 0x0 PRP2 0x0 00:18:07.524 [2024-12-05 
19:37:49.646615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.524 [2024-12-05 19:37:49.646638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.524 [2024-12-05 19:37:49.646648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87168 len:8 PRP1 0x0 PRP2 0x0 00:18:07.524 [2024-12-05 19:37:49.646661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.524 [2024-12-05 19:37:49.646690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.524 [2024-12-05 19:37:49.646701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87176 len:8 PRP1 0x0 PRP2 0x0 00:18:07.524 [2024-12-05 19:37:49.646714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.524 [2024-12-05 19:37:49.646738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.524 [2024-12-05 19:37:49.646748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87184 len:8 PRP1 0x0 PRP2 0x0 00:18:07.524 [2024-12-05 19:37:49.646761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.524 [2024-12-05 19:37:49.646785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.524 [2024-12-05 19:37:49.646795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87192 len:8 PRP1 0x0 PRP2 0x0 00:18:07.524 [2024-12-05 19:37:49.646818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.524 [2024-12-05 19:37:49.646843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.524 [2024-12-05 19:37:49.646853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87200 len:8 PRP1 0x0 PRP2 0x0 00:18:07.524 [2024-12-05 19:37:49.646867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.524 [2024-12-05 19:37:49.646891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.524 [2024-12-05 19:37:49.646901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87208 len:8 PRP1 0x0 PRP2 0x0 00:18:07.524 [2024-12-05 19:37:49.646914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.524 [2024-12-05 19:37:49.646929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.525 [2024-12-05 19:37:49.646939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.525 [2024-12-05 19:37:49.646949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87216 len:8 PRP1 0x0 PRP2 0x0 00:18:07.525 [2024-12-05 19:37:49.646963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:49.646976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.525 [2024-12-05 19:37:49.646986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.525 [2024-12-05 19:37:49.646996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87224 len:8 PRP1 0x0 PRP2 0x0 00:18:07.525 [2024-12-05 19:37:49.647010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:49.647023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.525 [2024-12-05 19:37:49.647033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.525 [2024-12-05 19:37:49.647043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87232 len:8 PRP1 0x0 PRP2 0x0 00:18:07.525 [2024-12-05 19:37:49.647058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:49.647081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.525 [2024-12-05 19:37:49.647092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.525 [2024-12-05 19:37:49.647103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87240 len:8 PRP1 0x0 PRP2 0x0 00:18:07.525 [2024-12-05 19:37:49.647128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:49.647143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.525 [2024-12-05 19:37:49.647153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.525 [2024-12-05 19:37:49.647164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87248 len:8 PRP1 0x0 PRP2 0x0 00:18:07.525 [2024-12-05 19:37:49.647177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:49.647239] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:18:07.525 [2024-12-05 19:37:49.647308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.525 [2024-12-05 19:37:49.647332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
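The run of ABORTED - SQ DELETION (00/08) completions above (and continuing below) is the expected side effect of tearing a path down mid-I/O: every command still queued on the deleted submission queue is completed manually by the host driver and retried once the next path comes up. When reading a capture like this, those per-LBA records can be skimmed; the lines that matter are the failover, controller-state and reset notices. A minimal, hypothetical way to pull them out of a saved copy of this output (the file name bdevperf.log is an assumption, not something this run wrote):

  # Hypothetical helper: list only the failover-relevant events from a saved log.
  grep -E 'failover_trid|nvme_ctrlr_(fail|disconnect)|Resetting controller' bdevperf.log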
00:18:07.525 [2024-12-05 19:37:49.647348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.525 [2024-12-05 19:37:49.647361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:49.647376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.525 [2024-12-05 19:37:49.647390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:49.647404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.525 [2024-12-05 19:37:49.647417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:49.647431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:07.525 [2024-12-05 19:37:49.651298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:07.525 [2024-12-05 19:37:49.651340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b9c60 (9): Bad file descriptor 00:18:07.525 [2024-12-05 19:37:49.682935] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:18:07.525 8648.60 IOPS, 33.78 MiB/s [2024-12-05T19:38:00.962Z] 8752.50 IOPS, 34.19 MiB/s [2024-12-05T19:38:00.962Z] 8822.14 IOPS, 34.46 MiB/s [2024-12-05T19:38:00.962Z] 8886.38 IOPS, 34.71 MiB/s [2024-12-05T19:38:00.962Z] 8927.44 IOPS, 34.87 MiB/s [2024-12-05T19:38:00.962Z] [2024-12-05 19:37:54.242185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.525 [2024-12-05 19:37:54.242254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.525 [2024-12-05 19:37:54.242314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.525 [2024-12-05 19:37:54.242342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.525 [2024-12-05 19:37:54.242370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.525 [2024-12-05 19:37:54.242398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.525 [2024-12-05 19:37:54.242425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.525 [2024-12-05 19:37:54.242477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.525 [2024-12-05 19:37:54.242507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.525 [2024-12-05 19:37:54.242534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.525 [2024-12-05 19:37:54.242561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.525 [2024-12-05 19:37:54.242589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.525 [2024-12-05 19:37:54.242616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.525 [2024-12-05 19:37:54.242643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.525 [2024-12-05 19:37:54.242671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.525 [2024-12-05 19:37:54.242698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.525 [2024-12-05 19:37:54.242725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.525 [2024-12-05 19:37:54.242752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.525 [2024-12-05 19:37:54.242782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.525 [2024-12-05 19:37:54.242810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.525 [2024-12-05 19:37:54.242847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.525 [2024-12-05 19:37:54.242891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.525 [2024-12-05 19:37:54.242920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.525 [2024-12-05 19:37:54.242948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.525 [2024-12-05 19:37:54.242977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.525 [2024-12-05 19:37:54.242993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.243006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.243035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.243063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.243091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.243120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.243180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.243211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.243241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.526 [2024-12-05 19:37:54.243297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.526 [2024-12-05 19:37:54.243343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.526 [2024-12-05 19:37:54.243371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 
19:37:54.243386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.526 [2024-12-05 19:37:54.243400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.526 [2024-12-05 19:37:54.243429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.526 [2024-12-05 19:37:54.243458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.526 [2024-12-05 19:37:54.243487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.526 [2024-12-05 19:37:54.243515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.526 [2024-12-05 19:37:54.243557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.526 [2024-12-05 19:37:54.243585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.526 [2024-12-05 19:37:54.243613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.526 [2024-12-05 19:37:54.243641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.526 [2024-12-05 19:37:54.243676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.526 [2024-12-05 19:37:54.243704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.526 [2024-12-05 19:37:54.243732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.526 [2024-12-05 19:37:54.243760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.243820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.243851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.243881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.243911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.243942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.243971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.243987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.244001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.244017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:111 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.244031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.244047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.244061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.244084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.244099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.244116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.244140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.244158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.526 [2024-12-05 19:37:54.244173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.526 [2024-12-05 19:37:54.244189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.527 [2024-12-05 19:37:54.244203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.527 [2024-12-05 19:37:54.244247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.527 [2024-12-05 19:37:54.244276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.527 [2024-12-05 19:37:54.244319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.244347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41488 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.244376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.244405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.244433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.244461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.244495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.244525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.244552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.244580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.244609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.244637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 
[2024-12-05 19:37:54.244681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.244710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.244739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.244770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.244798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.527 [2024-12-05 19:37:54.244844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.527 [2024-12-05 19:37:54.244873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.527 [2024-12-05 19:37:54.244922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.527 [2024-12-05 19:37:54.244953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.527 [2024-12-05 19:37:54.244982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.244998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.527 [2024-12-05 19:37:54.245012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.245042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.527 [2024-12-05 19:37:54.245056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.245071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.527 [2024-12-05 19:37:54.245085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.245100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.245113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.245128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.245142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.245173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.245187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.245227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.245243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.245259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.245272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.245288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.245308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.245325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.245338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.245361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.245375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.245391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.245404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.245419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.245432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.245448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.245461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.245476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.527 [2024-12-05 19:37:54.245490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.527 [2024-12-05 19:37:54.245505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.528 [2024-12-05 19:37:54.245518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.245533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.528 [2024-12-05 19:37:54.245562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.245576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.528 [2024-12-05 19:37:54.245589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.245604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.528 [2024-12-05 19:37:54.245617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.245632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.528 [2024-12-05 19:37:54.245645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.245660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.528 [2024-12-05 19:37:54.245673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.245687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.528 [2024-12-05 19:37:54.245700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.245715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.528 [2024-12-05 19:37:54.245734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.245749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.528 [2024-12-05 19:37:54.245763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.245778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.528 [2024-12-05 19:37:54.245799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.245814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.528 [2024-12-05 19:37:54.245827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.245841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6298b0 is same with the state(6) to be set 00:18:07.528 [2024-12-05 19:37:54.245857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.245868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.245878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41216 len:8 PRP1 0x0 PRP2 0x0 00:18:07.528 [2024-12-05 19:37:54.245891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.245905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.245914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.245941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41736 len:8 PRP1 0x0 PRP2 0x0 00:18:07.528 [2024-12-05 19:37:54.245954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.245968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.245977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.245987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41744 len:8 PRP1 0x0 PRP2 0x0 00:18:07.528 [2024-12-05 19:37:54.246001] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.246014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.246024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.246034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41752 len:8 PRP1 0x0 PRP2 0x0 00:18:07.528 [2024-12-05 19:37:54.246048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.246078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.246088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.246098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41760 len:8 PRP1 0x0 PRP2 0x0 00:18:07.528 [2024-12-05 19:37:54.246111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.246125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.246146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.246170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41768 len:8 PRP1 0x0 PRP2 0x0 00:18:07.528 [2024-12-05 19:37:54.246186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.246201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.246211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.246221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41776 len:8 PRP1 0x0 PRP2 0x0 00:18:07.528 [2024-12-05 19:37:54.246235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.246254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.246265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.246275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41784 len:8 PRP1 0x0 PRP2 0x0 00:18:07.528 [2024-12-05 19:37:54.246288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.246302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.246312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.246323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41792 len:8 PRP1 0x0 PRP2 0x0 00:18:07.528 [2024-12-05 19:37:54.246336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.246349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.246359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.246385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41800 len:8 PRP1 0x0 PRP2 0x0 00:18:07.528 [2024-12-05 19:37:54.246398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.246411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.246420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.246430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41808 len:8 PRP1 0x0 PRP2 0x0 00:18:07.528 [2024-12-05 19:37:54.246443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.246456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.246466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.246491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41816 len:8 PRP1 0x0 PRP2 0x0 00:18:07.528 [2024-12-05 19:37:54.246503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.246516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.246526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.246536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41824 len:8 PRP1 0x0 PRP2 0x0 00:18:07.528 [2024-12-05 19:37:54.246548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.246568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.246578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.246588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41832 len:8 PRP1 0x0 PRP2 0x0 00:18:07.528 [2024-12-05 19:37:54.246601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.246614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.246623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.246633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41840 len:8 PRP1 0x0 PRP2 0x0 00:18:07.528 [2024-12-05 19:37:54.246645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.246663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.246674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.246683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41848 len:8 PRP1 0x0 PRP2 0x0 00:18:07.528 [2024-12-05 19:37:54.246696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.528 [2024-12-05 19:37:54.246709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.528 [2024-12-05 19:37:54.246718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:07.528 [2024-12-05 19:37:54.246728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41856 len:8 PRP1 0x0 PRP2 0x0 00:18:07.529 [2024-12-05 19:37:54.246741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.529 [2024-12-05 19:37:54.246801] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:18:07.529 [2024-12-05 19:37:54.246874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.529 [2024-12-05 19:37:54.246897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.529 [2024-12-05 19:37:54.246912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.529 [2024-12-05 19:37:54.246926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.529 [2024-12-05 19:37:54.246940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.529 [2024-12-05 19:37:54.246953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.529 [2024-12-05 19:37:54.246968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.529 [2024-12-05 19:37:54.246981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.529 [2024-12-05 19:37:54.246995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:07.529 [2024-12-05 19:37:54.247029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b9c60 (9): Bad file descriptor 00:18:07.529 [2024-12-05 19:37:54.250763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:07.529 [2024-12-05 19:37:54.274569] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
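Both failovers recorded in this stretch of the log (10.0.0.3:4421 to 4422, then 4422 to 4420) walk through the alternate paths registered for the same subsystem with -x failover; after each successful reset the bdev simply drives I/O down the next trid. A rough, hedged way to confirm which path is active at any point, reusing the bdevperf RPC socket from this run (the traddr/trsvcid field names are assumed from the RPC's JSON output, not taken from this log):

  # Ask the bdevperf instance which NVMe controllers/paths it currently has,
  # then pick out the transport address and service id of the active path.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -E '"(traddr|trsvcid)"'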
00:18:07.529 8941.80 IOPS, 34.93 MiB/s [2024-12-05T19:38:00.966Z] 9001.64 IOPS, 35.16 MiB/s [2024-12-05T19:38:00.966Z] 9024.17 IOPS, 35.25 MiB/s [2024-12-05T19:38:00.966Z] 9056.15 IOPS, 35.38 MiB/s [2024-12-05T19:38:00.966Z] 9085.86 IOPS, 35.49 MiB/s [2024-12-05T19:38:00.966Z] 9110.00 IOPS, 35.59 MiB/s
00:18:07.529 Latency(us)
00:18:07.529 [2024-12-05T19:38:00.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:07.529 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:07.529 Verification LBA range: start 0x0 length 0x4000
00:18:07.529 NVMe0n1 : 15.01 9112.14 35.59 228.46 0.00 13671.64 659.08 15371.17
00:18:07.529 [2024-12-05T19:38:00.966Z] ===================================================================================================================
00:18:07.529 [2024-12-05T19:38:00.966Z] Total : 9112.14 35.59 228.46 0.00 13671.64 659.08 15371.17
00:18:07.529 Received shutdown signal, test time was about 15.000000 seconds
00:18:07.529
00:18:07.529 Latency(us)
00:18:07.529 [2024-12-05T19:38:00.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:07.529 [2024-12-05T19:38:00.966Z] ===================================================================================================================
00:18:07.529 [2024-12-05T19:38:00.966Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:07.529 19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:18:07.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75463
19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75463 /var/tmp/bdevperf.sock
19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75463 ']'
19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
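The trace above shows the usual way these checks drive bdevperf: the binary is started with -z so it comes up idle with nothing but an RPC socket, the harness waits for that socket, attaches the NVMe-oF paths over RPC (the add_listener and attach_controller calls follow below), and only then starts the actual I/O with the perform_tests helper. Condensed into a standalone sketch under those assumptions (the polling loop is a crude stand-in for the real waitforlisten in autotest_common.sh):

  # Start bdevperf idle (-z) with an RPC socket; 128 QD, 4 KiB verify workload.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!

  # Crude stand-in for waitforlisten: poll until the RPC socket answers.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 1
  done

  # ...configure listeners and attach the NVMe-oF paths over RPC here...

  # Queue the I/O job; results are printed as JSON when it finishes.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

  # Shut the idle bdevperf instance down once the run is done.
  kill "$bdevperf_pid"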
00:18:07.529 19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.529 19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:07.529 19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.529 19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:07.529 19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:07.529 [2024-12-05 19:38:00.645860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:07.529 19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:07.529 [2024-12-05 19:38:00.913858] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:18:07.786 19:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:08.043 NVMe0n1 00:18:08.043 19:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:08.299 00:18:08.299 19:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:08.555 00:18:08.555 19:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:08.555 19:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:08.833 19:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:09.400 19:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:12.728 19:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:12.728 19:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:12.728 19:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75538 00:18:12.728 19:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:12.728 19:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75538 00:18:13.663 { 00:18:13.663 "results": [ 00:18:13.663 { 00:18:13.663 "job": "NVMe0n1", 00:18:13.663 "core_mask": "0x1", 00:18:13.663 "workload": "verify", 00:18:13.663 "status": "finished", 00:18:13.663 "verify_range": { 00:18:13.663 "start": 0, 00:18:13.663 "length": 16384 00:18:13.663 }, 00:18:13.663 "queue_depth": 128, 
00:18:13.663 "io_size": 4096, 00:18:13.663 "runtime": 1.005174, 00:18:13.663 "iops": 7024.6544379381085, 00:18:13.663 "mibps": 27.440056398195736, 00:18:13.663 "io_failed": 0, 00:18:13.663 "io_timeout": 0, 00:18:13.663 "avg_latency_us": 18147.10833129482, 00:18:13.663 "min_latency_us": 2159.7090909090907, 00:18:13.663 "max_latency_us": 15192.436363636363 00:18:13.663 } 00:18:13.663 ], 00:18:13.663 "core_count": 1 00:18:13.663 } 00:18:13.663 19:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:13.663 [2024-12-05 19:38:00.098315] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:18:13.663 [2024-12-05 19:38:00.098417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75463 ] 00:18:13.663 [2024-12-05 19:38:00.239253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.663 [2024-12-05 19:38:00.297729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.663 [2024-12-05 19:38:00.355277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:13.663 [2024-12-05 19:38:02.534329] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:18:13.663 [2024-12-05 19:38:02.534482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.663 [2024-12-05 19:38:02.534507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.663 [2024-12-05 19:38:02.534525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.663 [2024-12-05 19:38:02.534539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.663 [2024-12-05 19:38:02.534553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.663 [2024-12-05 19:38:02.534566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.663 [2024-12-05 19:38:02.534580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.663 [2024-12-05 19:38:02.534593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.663 [2024-12-05 19:38:02.534606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:18:13.663 [2024-12-05 19:38:02.534657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:18:13.663 [2024-12-05 19:38:02.534689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2158c60 (9): Bad file descriptor 00:18:13.663 [2024-12-05 19:38:02.540706] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:18:13.663 Running I/O for 1 seconds... 00:18:13.663 6933.00 IOPS, 27.08 MiB/s 00:18:13.663 Latency(us) 00:18:13.663 [2024-12-05T19:38:07.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.663 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:13.663 Verification LBA range: start 0x0 length 0x4000 00:18:13.663 NVMe0n1 : 1.01 7024.65 27.44 0.00 0.00 18147.11 2159.71 15192.44 00:18:13.663 [2024-12-05T19:38:07.100Z] =================================================================================================================== 00:18:13.663 [2024-12-05T19:38:07.100Z] Total : 7024.65 27.44 0.00 0.00 18147.11 2159.71 15192.44 00:18:13.663 19:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:13.663 19:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:13.922 19:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:14.180 19:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:14.180 19:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:14.439 19:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:14.699 19:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:18.013 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:18.013 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:18.013 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75463 00:18:18.013 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75463 ']' 00:18:18.013 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75463 00:18:18.013 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:18.013 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.013 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75463 00:18:18.271 killing process with pid 75463 00:18:18.271 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:18.271 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:18.271 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75463' 00:18:18.271 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75463 00:18:18.272 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75463 00:18:18.272 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:18.272 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:18.839 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:18.839 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:18.839 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:18.839 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:18.839 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:18:18.839 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:18.839 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:18:18.839 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:18.839 19:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:18.839 rmmod nvme_tcp 00:18:18.839 rmmod nvme_fabrics 00:18:18.839 rmmod nvme_keyring 00:18:18.839 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:18.839 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:18:18.839 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:18:18.839 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75210 ']' 00:18:18.839 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75210 00:18:18.839 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75210 ']' 00:18:18.839 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75210 00:18:18.839 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:18.839 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.839 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75210 00:18:18.839 killing process with pid 75210 00:18:18.839 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:18.839 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:18.839 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75210' 00:18:18.839 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75210 00:18:18.839 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75210 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:19.098 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:19.357 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:19.357 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.357 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.357 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.357 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:18:19.357 00:18:19.357 real 0m32.566s 00:18:19.357 user 2m5.958s 00:18:19.357 sys 0m5.581s 00:18:19.357 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:19.357 ************************************ 00:18:19.357 END TEST nvmf_failover 00:18:19.358 ************************************ 00:18:19.358 19:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:19.358 19:38:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:19.358 19:38:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:19.358 19:38:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.358 19:38:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.358 ************************************ 00:18:19.358 START TEST nvmf_host_discovery 00:18:19.358 ************************************ 00:18:19.358 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:19.358 * Looking for test storage... 
00:18:19.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:19.358 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:19.358 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:19.358 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:18:19.358 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:19.358 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.358 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.358 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.358 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.358 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:19.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.618 --rc genhtml_branch_coverage=1 00:18:19.618 --rc genhtml_function_coverage=1 00:18:19.618 --rc genhtml_legend=1 00:18:19.618 --rc geninfo_all_blocks=1 00:18:19.618 --rc geninfo_unexecuted_blocks=1 00:18:19.618 00:18:19.618 ' 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:19.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.618 --rc genhtml_branch_coverage=1 00:18:19.618 --rc genhtml_function_coverage=1 00:18:19.618 --rc genhtml_legend=1 00:18:19.618 --rc geninfo_all_blocks=1 00:18:19.618 --rc geninfo_unexecuted_blocks=1 00:18:19.618 00:18:19.618 ' 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:19.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.618 --rc genhtml_branch_coverage=1 00:18:19.618 --rc genhtml_function_coverage=1 00:18:19.618 --rc genhtml_legend=1 00:18:19.618 --rc geninfo_all_blocks=1 00:18:19.618 --rc geninfo_unexecuted_blocks=1 00:18:19.618 00:18:19.618 ' 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:19.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.618 --rc genhtml_branch_coverage=1 00:18:19.618 --rc genhtml_function_coverage=1 00:18:19.618 --rc genhtml_legend=1 00:18:19.618 --rc geninfo_all_blocks=1 00:18:19.618 --rc geninfo_unexecuted_blocks=1 00:18:19.618 00:18:19.618 ' 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.618 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:19.619 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:19.619 Cannot find device "nvmf_init_br" 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:19.619 Cannot find device "nvmf_init_br2" 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:19.619 Cannot find device "nvmf_tgt_br" 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:19.619 Cannot find device "nvmf_tgt_br2" 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:19.619 Cannot find device "nvmf_init_br" 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:19.619 Cannot find device "nvmf_init_br2" 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:19.619 Cannot find device "nvmf_tgt_br" 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:19.619 Cannot find device "nvmf_tgt_br2" 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:19.619 Cannot find device "nvmf_br" 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:19.619 Cannot find device "nvmf_init_if" 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:19.619 Cannot find device "nvmf_init_if2" 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:19.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:19.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:19.619 19:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:19.619 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:19.619 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:19.878 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:19.878 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:19.878 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:19.878 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:19.878 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:19.878 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:19.878 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:19.878 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:19.878 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:19.878 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:19.879 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:19.879 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:18:19.879 00:18:19.879 --- 10.0.0.3 ping statistics --- 00:18:19.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.879 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:19.879 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:19.879 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:18:19.879 00:18:19.879 --- 10.0.0.4 ping statistics --- 00:18:19.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.879 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:19.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:19.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:19.879 00:18:19.879 --- 10.0.0.1 ping statistics --- 00:18:19.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.879 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:19.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:19.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:18:19.879 00:18:19.879 --- 10.0.0.2 ping statistics --- 00:18:19.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.879 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75859 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75859 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75859 ']' 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.879 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.137 [2024-12-05 19:38:13.337147] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:18:20.137 [2024-12-05 19:38:13.337239] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.137 [2024-12-05 19:38:13.495243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.137 [2024-12-05 19:38:13.555505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.137 [2024-12-05 19:38:13.555585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.137 [2024-12-05 19:38:13.555601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.137 [2024-12-05 19:38:13.555612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.137 [2024-12-05 19:38:13.555622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.137 [2024-12-05 19:38:13.556077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.395 [2024-12-05 19:38:13.617214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.395 [2024-12-05 19:38:13.741087] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.395 [2024-12-05 19:38:13.749318] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.395 19:38:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.395 null0 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.395 null1 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:20.395 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.396 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.396 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.396 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75889 00:18:20.396 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:20.396 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75889 /tmp/host.sock 00:18:20.396 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75889 ']' 00:18:20.396 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:18:20.396 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.396 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:20.396 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:20.396 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.396 19:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.654 [2024-12-05 19:38:13.843702] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:18:20.654 [2024-12-05 19:38:13.843850] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75889 ] 00:18:20.654 [2024-12-05 19:38:13.990582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.654 [2024-12-05 19:38:14.043744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.913 [2024-12-05 19:38:14.100104] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.913 19:38:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:20.913 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.172 19:38:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.172 [2024-12-05 19:38:14.557342] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:21.172 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:21.430 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:21.431 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:21.431 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:21.431 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.431 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:21.431 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.431 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.431 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:18:21.431 19:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:18:21.996 [2024-12-05 19:38:15.194238] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:21.996 [2024-12-05 19:38:15.194314] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:21.996 [2024-12-05 19:38:15.194350] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:21.996 [2024-12-05 19:38:15.200289] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:21.996 [2024-12-05 19:38:15.254669] 
bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:18:21.996 [2024-12-05 19:38:15.255752] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xe46da0:1 started. 00:18:21.996 [2024-12-05 19:38:15.257848] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:21.996 [2024-12-05 19:38:15.257874] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:21.996 [2024-12-05 19:38:15.262864] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xe46da0 was disconnected and freed. delete nvme_qpair. 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:22.562 19:38:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.562 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:18:22.563 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:22.563 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:22.563 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:22.563 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:22.563 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:22.563 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:22.563 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:22.563 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:22.563 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:22.563 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
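The is_notification_count_eq checks traced around this point poll the host's RPC socket for bdev notifications and count them with jq. A minimal standalone sketch of the same check, assuming an SPDK checkout where scripts/rpc.py stands in for the harness's rpc_cmd wrapper and the host app still owns /tmp/host.sock as in this run:

  # Count host-side notifications newer than the last seen id and compare with
  # the number of events the step is expected to have produced (1 new bdev here).
  last_notify_id=0
  expected_count=1
  notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$last_notify_id" | jq '. | length')
  if (( notification_count == expected_count )); then
      echo "saw $notification_count new notification(s), as expected"
  fi
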
00:18:22.563 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.563 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:22.563 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:22.563 19:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.821 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:22.821 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:22.821 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:22.821 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:22.821 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:22.821 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.821 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:22.821 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.821 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:22.821 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:22.821 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:22.821 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:22.822 [2024-12-05 19:38:16.036640] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xe552a0:1 started. 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:22.822 [2024-12-05 19:38:16.043723] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xe552a0 was disconnected and freed. delete nvme_qpair. 
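The step above hot-adds a second namespace (null1) to nqn.2016-06.io.spdk:cnode0 and then waits for the host's bdev list to grow to "nvme0n1 nvme0n2". A hedged target-plus-host sketch of that sequence; the bdev_null_create sizes are illustrative, and in this run the null bdevs were created earlier in the script, before the captured trace:

  # Target side: expose a second null bdev through the already-published subsystem.
  scripts/rpc.py bdev_null_create null1 1000 512      # size_mb/block_size here are illustrative
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
  # Host side: the discovery path should surface the new namespace as a second bdev.
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  # expected: "nvme0n1 nvme0n2"
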
00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:22.822 [2024-12-05 19:38:16.151259] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:22.822 [2024-12-05 19:38:16.152238] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:22.822 [2024-12-05 19:38:16.152272] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:22.822 [2024-12-05 19:38:16.158247] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:22.822 [2024-12-05 19:38:16.217642] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:18:22.822 [2024-12-05 19:38:16.217690] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:22.822 [2024-12-05 19:38:16.217701] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:22.822 [2024-12-05 19:38:16.217708] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:22.822 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.085 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:23.085 [2024-12-05 19:38:16.392705] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:23.085 [2024-12-05 19:38:16.392741] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:23.085 [2024-12-05 19:38:16.394810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.085 [2024-12-05 19:38:16.395018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.085 [2024-12-05 19:38:16.395171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.085 [2024-12-05 19:38:16.395288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.085 [2024-12-05 19:38:16.395311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.086 [2024-12-05 19:38:16.395321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.086 [2024-12-05 19:38:16.395331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.086 [2024-12-05 19:38:16.395340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:23.086 [2024-12-05 19:38:16.395350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xe22fb0 is same with the state(6) to be set 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:23.086 [2024-12-05 19:38:16.398699] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:18:23.086 [2024-12-05 19:38:16.398729] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:23.086 [2024-12-05 19:38:16.398799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe22fb0 (9): Bad file descriptor 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
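Dropping the 4420 listener above makes the discovery poller report that path as "not found" while 4421 is found again; the surviving path is what the trsvcid query in the following entries keeps returning. The same check as a standalone sketch, with the commands and jq filter taken from the trace and scripts/rpc.py assumed in place of rpc_cmd:

  # Target side: drop the first TCP listener from the subsystem.
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # Host side: controller nvme0 should be left with the 4421 path only.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # expected: 4421
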
00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:23.086 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:23.346 
19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:23.346 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.606 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:23.606 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:23.606 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:23.606 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:23.606 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:23.606 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.606 19:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:24.544 [2024-12-05 19:38:17.824953] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:24.544 [2024-12-05 19:38:17.825315] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:24.544 [2024-12-05 19:38:17.825383] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:24.544 [2024-12-05 19:38:17.830999] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:18:24.544 [2024-12-05 19:38:17.889542] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:18:24.544 [2024-12-05 19:38:17.890536] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xe1e220:1 started. 00:18:24.544 [2024-12-05 19:38:17.892877] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:24.544 [2024-12-05 19:38:17.893083] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.544 [2024-12-05 19:38:17.894607] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xe1e220 was disconnected and freed. delete nvme_qpair. 
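The trace above stops the discovery service and starts it again with -w (wait_for_attach), so the RPC only returns once the initial discovery log page has been processed and nvme0 is attached again; the duplicate bdev_nvme_start_discovery attempts that follow are then expected to fail. As a standalone sketch using the same arguments as this run, with scripts/rpc.py assumed in place of rpc_cmd:

  # Host side: tear down the running discovery controller...
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
  # ...and start it again; -w blocks until the initial attach completes, so the
  # nvme0 controller exists as soon as the command returns.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # A second start that reuses an already-registered name is rejected with JSON-RPC error -17 "File exists".
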
00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:24.544 request: 00:18:24.544 { 00:18:24.544 "name": "nvme", 00:18:24.544 "trtype": "tcp", 00:18:24.544 "traddr": "10.0.0.3", 00:18:24.544 "adrfam": "ipv4", 00:18:24.544 "trsvcid": "8009", 00:18:24.544 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:24.544 "wait_for_attach": true, 00:18:24.544 "method": "bdev_nvme_start_discovery", 00:18:24.544 "req_id": 1 00:18:24.544 } 00:18:24.544 Got JSON-RPC error response 00:18:24.544 response: 00:18:24.544 { 00:18:24.544 "code": -17, 00:18:24.544 "message": "File exists" 00:18:24.544 } 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:24.544 19:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:24.804 request: 00:18:24.804 { 00:18:24.804 "name": "nvme_second", 00:18:24.804 "trtype": "tcp", 00:18:24.804 "traddr": "10.0.0.3", 00:18:24.804 "adrfam": "ipv4", 00:18:24.804 "trsvcid": "8009", 00:18:24.804 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:24.804 "wait_for_attach": true, 00:18:24.804 "method": "bdev_nvme_start_discovery", 00:18:24.804 "req_id": 1 00:18:24.804 } 00:18:24.804 Got JSON-RPC error response 00:18:24.804 response: 00:18:24.804 { 00:18:24.804 "code": -17, 00:18:24.804 "message": "File exists" 00:18:24.804 } 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # 
[[ -n '' ]] 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:24.804 19:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.742 [2024-12-05 19:38:19.177607] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:25.742 [2024-12-05 19:38:19.177694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.3, port=8010 00:18:25.742 [2024-12-05 19:38:19.177719] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:25.742 [2024-12-05 19:38:19.177729] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:25.742 [2024-12-05 19:38:19.177738] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:27.163 [2024-12-05 19:38:20.177597] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:27.163 [2024-12-05 19:38:20.177685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1e300 with addr=10.0.0.3, port=8010 00:18:27.163 [2024-12-05 19:38:20.177712] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:27.163 [2024-12-05 19:38:20.177722] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:27.163 [2024-12-05 19:38:20.177730] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:27.745 [2024-12-05 19:38:21.177461] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:18:27.745 request: 00:18:27.745 { 00:18:28.004 "name": "nvme_second", 00:18:28.004 "trtype": "tcp", 00:18:28.004 "traddr": "10.0.0.3", 00:18:28.004 "adrfam": "ipv4", 00:18:28.004 "trsvcid": "8010", 00:18:28.004 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:28.004 "wait_for_attach": false, 00:18:28.004 "attach_timeout_ms": 3000, 00:18:28.004 "method": "bdev_nvme_start_discovery", 00:18:28.004 "req_id": 1 00:18:28.004 } 00:18:28.004 Got JSON-RPC error response 00:18:28.004 response: 00:18:28.004 { 00:18:28.004 "code": -110, 00:18:28.004 "message": "Connection timed out" 00:18:28.004 } 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:28.004 19:38:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75889 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:28.004 rmmod nvme_tcp 00:18:28.004 rmmod nvme_fabrics 00:18:28.004 rmmod nvme_keyring 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75859 ']' 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75859 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75859 ']' 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75859 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75859 00:18:28.004 killing process with pid 75859 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75859' 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75859 00:18:28.004 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75859 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:28.266 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:18:28.525 00:18:28.525 real 0m9.194s 00:18:28.525 user 0m17.298s 00:18:28.525 sys 0m2.095s 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.525 ************************************ 00:18:28.525 END TEST nvmf_host_discovery 00:18:28.525 ************************************ 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.525 ************************************ 
00:18:28.525 START TEST nvmf_host_multipath_status 00:18:28.525 ************************************ 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:28.525 * Looking for test storage... 00:18:28.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:18:28.525 19:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:28.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.785 --rc genhtml_branch_coverage=1 00:18:28.785 --rc genhtml_function_coverage=1 00:18:28.785 --rc genhtml_legend=1 00:18:28.785 --rc geninfo_all_blocks=1 00:18:28.785 --rc geninfo_unexecuted_blocks=1 00:18:28.785 00:18:28.785 ' 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:28.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.785 --rc genhtml_branch_coverage=1 00:18:28.785 --rc genhtml_function_coverage=1 00:18:28.785 --rc genhtml_legend=1 00:18:28.785 --rc geninfo_all_blocks=1 00:18:28.785 --rc geninfo_unexecuted_blocks=1 00:18:28.785 00:18:28.785 ' 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:28.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.785 --rc genhtml_branch_coverage=1 00:18:28.785 --rc genhtml_function_coverage=1 00:18:28.785 --rc genhtml_legend=1 00:18:28.785 --rc geninfo_all_blocks=1 00:18:28.785 --rc geninfo_unexecuted_blocks=1 00:18:28.785 00:18:28.785 ' 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:28.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.785 --rc genhtml_branch_coverage=1 00:18:28.785 --rc genhtml_function_coverage=1 00:18:28.785 --rc genhtml_legend=1 00:18:28.785 --rc geninfo_all_blocks=1 00:18:28.785 --rc geninfo_unexecuted_blocks=1 00:18:28.785 00:18:28.785 ' 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:28.785 19:38:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.785 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:28.786 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:28.786 Cannot find device "nvmf_init_br" 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:28.786 Cannot find device "nvmf_init_br2" 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:28.786 Cannot find device "nvmf_tgt_br" 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:28.786 Cannot find device "nvmf_tgt_br2" 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:28.786 Cannot find device "nvmf_init_br" 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:28.786 Cannot find device "nvmf_init_br2" 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:28.786 Cannot find device "nvmf_tgt_br" 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:28.786 Cannot find device "nvmf_tgt_br2" 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:28.786 Cannot find device "nvmf_br" 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:18:28.786 Cannot find device "nvmf_init_if" 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:18:28.786 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:29.046 Cannot find device "nvmf_init_if2" 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:29.046 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:29.046 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:29.046 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:29.046 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:18:29.046 00:18:29.046 --- 10.0.0.3 ping statistics --- 00:18:29.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.046 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:29.046 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:29.046 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:18:29.046 00:18:29.046 --- 10.0.0.4 ping statistics --- 00:18:29.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.046 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:29.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:29.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:29.046 00:18:29.046 --- 10.0.0.1 ping statistics --- 00:18:29.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.046 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:29.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:29.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:18:29.046 00:18:29.046 --- 10.0.0.2 ping statistics --- 00:18:29.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.046 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:29.046 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:29.306 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:29.306 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:29.306 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.306 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:29.306 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76383 00:18:29.306 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:29.306 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76383 00:18:29.306 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76383 ']' 00:18:29.306 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.306 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.306 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:29.306 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.306 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:29.306 [2024-12-05 19:38:22.559684] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:18:29.306 [2024-12-05 19:38:22.560389] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.306 [2024-12-05 19:38:22.717974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:29.566 [2024-12-05 19:38:22.779540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.566 [2024-12-05 19:38:22.779616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.566 [2024-12-05 19:38:22.779640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.566 [2024-12-05 19:38:22.779651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.566 [2024-12-05 19:38:22.779660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:29.566 [2024-12-05 19:38:22.784170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.566 [2024-12-05 19:38:22.784191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.566 [2024-12-05 19:38:22.846256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:29.566 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.566 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:29.566 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.566 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.566 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:29.566 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.566 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76383 00:18:29.566 19:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:29.825 [2024-12-05 19:38:23.240076] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.825 19:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:30.084 Malloc0 00:18:30.343 19:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:30.603 19:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:30.862 19:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:31.122 [2024-12-05 19:38:24.387296] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:31.122 19:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:31.381 [2024-12-05 19:38:24.631433] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:31.381 19:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76431 00:18:31.381 19:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:31.381 19:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:31.381 19:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76431 /var/tmp/bdevperf.sock 00:18:31.381 19:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76431 ']' 00:18:31.381 19:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.381 19:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.381 19:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:31.381 19:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.381 19:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:31.641 19:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.642 19:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:31.642 19:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:31.899 19:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:32.467 Nvme0n1 00:18:32.467 19:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:32.729 Nvme0n1 00:18:32.729 19:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:32.729 19:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:34.632 19:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:34.632 19:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:35.200 19:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:35.200 19:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:36.582 19:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:36.582 19:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:36.582 19:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:36.582 19:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:36.582 19:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:36.582 19:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:36.582 19:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:36.583 19:38:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:36.854 19:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:36.854 19:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:36.854 19:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:36.854 19:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:37.151 19:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.151 19:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:37.151 19:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.151 19:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:37.410 19:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.410 19:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:37.410 19:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:37.410 19:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.671 19:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.671 19:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:37.671 19:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.671 19:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:37.931 19:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.931 19:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:37.931 19:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:38.190 19:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
00:18:38.449 19:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:39.826 19:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:39.826 19:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:39.826 19:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:39.826 19:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:39.826 19:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:39.826 19:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:39.826 19:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:39.826 19:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:40.085 19:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.085 19:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:40.085 19:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.085 19:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:40.343 19:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.343 19:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:40.343 19:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.343 19:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:40.602 19:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.602 19:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:40.602 19:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.602 19:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:40.860 19:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.860 19:38:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:40.860 19:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.860 19:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:41.119 19:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:41.119 19:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:41.119 19:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:41.734 19:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:18:41.734 19:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:42.670 19:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:42.670 19:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:42.670 19:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:42.929 19:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:43.188 19:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:43.188 19:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:43.188 19:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:43.188 19:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.447 19:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:43.447 19:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:43.447 19:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.447 19:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:43.704 19:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:43.704 19:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:43.704 19:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.704 19:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:43.962 19:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:43.962 19:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:43.962 19:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.962 19:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:44.220 19:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.220 19:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:44.220 19:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.220 19:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:44.478 19:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.478 19:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:18:44.478 19:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:44.736 19:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:44.994 19:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:46.426 19:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:46.426 19:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:46.426 19:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.426 19:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:46.426 19:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:46.426 19:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:18:46.426 19:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:46.426 19:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.684 19:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:46.684 19:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:46.684 19:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.684 19:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:46.943 19:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:46.943 19:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:46.943 19:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.943 19:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:47.202 19:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.202 19:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:47.202 19:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:47.202 19:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.461 19:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.461 19:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:47.461 19:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.461 19:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:48.028 19:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:48.028 19:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:48.028 19:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:48.028 19:38:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:48.595 19:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:49.532 19:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:49.532 19:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:49.532 19:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:49.532 19:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.792 19:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:49.792 19:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:49.792 19:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.792 19:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:50.051 19:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:50.051 19:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:50.051 19:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.051 19:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:50.309 19:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.309 19:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:50.309 19:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:50.309 19:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.568 19:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.568 19:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:50.568 19:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.568 19:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:18:50.827 19:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:50.827 19:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:51.086 19:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.086 19:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:51.345 19:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:51.345 19:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:51.345 19:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:51.603 19:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:51.867 19:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:52.805 19:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:52.805 19:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:52.805 19:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.805 19:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:53.064 19:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:53.064 19:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:53.064 19:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.064 19:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:53.323 19:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.323 19:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:53.323 19:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.323 19:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
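check_status itself (sh@68 through sh@73 in the trace) appears to be nothing more than six port_status assertions, one per field and port, in a fixed order; a sketch under the same assumptions as above:

  check_status() {
      # Expected values, in trace order:
      # $1 current@4420,   $2 current@4421,
      # $3 connected@4420, $4 connected@4421,
      # $5 accessible@4420, $6 accessible@4421
      port_status 4420 current    "$1"
      port_status 4421 current    "$2"
      port_status 4420 connected  "$3"
      port_status 4421 connected  "$4"
      port_status 4420 accessible "$5"
      port_status 4421 accessible "$6"
  }

So, for example, the check_status false true true true false true issued above after set_ANA_state inaccessible optimized asserts that only the optimized 4421 path is current and accessible while both paths stay connected.
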
00:18:53.582 19:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.582 19:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:53.582 19:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:53.582 19:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.842 19:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.842 19:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:53.842 19:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:53.842 19:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.101 19:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:54.101 19:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:54.101 19:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.101 19:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:54.360 19:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.360 19:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:18:54.928 19:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:18:54.928 19:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:55.186 19:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:55.186 19:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:18:56.564 19:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:18:56.564 19:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:56.564 19:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:56.564 19:38:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.564 19:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:56.564 19:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:56.564 19:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.564 19:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:56.823 19:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:56.823 19:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:56.823 19:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.823 19:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:57.390 19:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.390 19:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:57.390 19:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.390 19:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:57.649 19:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.649 19:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:57.649 19:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.649 19:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:57.908 19:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.908 19:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:57.908 19:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.908 19:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:58.167 19:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:58.167 19:38:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:18:58.167 19:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:58.426 19:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:58.685 19:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:18:59.626 19:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:18:59.626 19:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:59.626 19:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.626 19:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:59.886 19:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:59.886 19:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:59.886 19:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.886 19:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:00.453 19:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.453 19:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:00.453 19:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:00.453 19:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.711 19:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.711 19:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:00.712 19:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.712 19:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:00.970 19:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.970 19:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:00.970 19:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.970 19:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:01.229 19:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.229 19:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:01.229 19:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.229 19:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:01.489 19:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.489 19:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:01.489 19:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:01.748 19:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:02.316 19:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:03.253 19:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:03.253 19:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:03.253 19:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.253 19:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:03.512 19:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:03.512 19:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:03.512 19:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.512 19:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:03.771 19:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:03.771 19:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:19:03.771 19:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.771 19:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:04.030 19:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.030 19:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:04.030 19:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.030 19:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:04.597 19:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.597 19:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:04.597 19:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.597 19:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:04.597 19:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.597 19:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:04.597 19:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.597 19:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:04.855 19:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.855 19:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:04.855 19:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:05.422 19:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:05.679 19:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:06.614 19:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:06.614 19:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:06.614 19:38:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.614 19:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:06.891 19:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:06.891 19:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:06.891 19:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:06.891 19:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.148 19:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:07.148 19:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:07.148 19:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.148 19:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:07.407 19:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.407 19:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:07.407 19:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.407 19:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:07.666 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.666 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:07.666 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:07.666 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.232 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.232 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:08.232 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:08.232 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.232 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:08.232 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76431 00:19:08.232 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76431 ']' 00:19:08.233 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76431 00:19:08.233 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:19:08.233 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:08.233 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76431 00:19:08.495 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:08.496 killing process with pid 76431 00:19:08.496 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:08.496 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76431' 00:19:08.496 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76431 00:19:08.496 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76431 00:19:08.496 { 00:19:08.496 "results": [ 00:19:08.496 { 00:19:08.496 "job": "Nvme0n1", 00:19:08.496 "core_mask": "0x4", 00:19:08.496 "workload": "verify", 00:19:08.496 "status": "terminated", 00:19:08.496 "verify_range": { 00:19:08.496 "start": 0, 00:19:08.496 "length": 16384 00:19:08.496 }, 00:19:08.496 "queue_depth": 128, 00:19:08.496 "io_size": 4096, 00:19:08.496 "runtime": 35.524886, 00:19:08.496 "iops": 8933.624727184206, 00:19:08.496 "mibps": 34.896971590563304, 00:19:08.496 "io_failed": 0, 00:19:08.496 "io_timeout": 0, 00:19:08.496 "avg_latency_us": 14296.781119957284, 00:19:08.496 "min_latency_us": 174.08, 00:19:08.496 "max_latency_us": 4026531.84 00:19:08.496 } 00:19:08.496 ], 00:19:08.496 "core_count": 1 00:19:08.496 } 00:19:08.496 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76431 00:19:08.496 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:08.496 [2024-12-05 19:38:24.706475] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:19:08.496 [2024-12-05 19:38:24.706601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76431 ] 00:19:08.496 [2024-12-05 19:38:24.855338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.496 [2024-12-05 19:38:24.923255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.496 [2024-12-05 19:38:24.981896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:08.496 Running I/O for 90 seconds... 
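The terminated bdevperf job prints its summary as the JSON block above. As a side note (not part of the test run), the headline numbers could be pulled out of that block with a jq one-liner along these lines, assuming the JSON has been saved, with the log timestamps stripped, to a hypothetical results.json:

  # job name, IOPS, throughput, average latency and runtime from the bdevperf summary
  jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us over \(.runtime) s"' results.json
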
00:19:08.496 8100.00 IOPS, 31.64 MiB/s [2024-12-05T19:39:01.933Z] 8552.00 IOPS, 33.41 MiB/s [2024-12-05T19:39:01.933Z] 8901.33 IOPS, 34.77 MiB/s [2024-12-05T19:39:01.933Z] 9156.00 IOPS, 35.77 MiB/s [2024-12-05T19:39:01.933Z] 9205.80 IOPS, 35.96 MiB/s [2024-12-05T19:39:01.933Z] 9202.00 IOPS, 35.95 MiB/s [2024-12-05T19:39:01.933Z] 9216.57 IOPS, 36.00 MiB/s [2024-12-05T19:39:01.933Z] 9261.50 IOPS, 36.18 MiB/s [2024-12-05T19:39:01.933Z] 9268.78 IOPS, 36.21 MiB/s [2024-12-05T19:39:01.933Z] 9302.70 IOPS, 36.34 MiB/s [2024-12-05T19:39:01.933Z] 9324.64 IOPS, 36.42 MiB/s [2024-12-05T19:39:01.933Z] 9362.25 IOPS, 36.57 MiB/s [2024-12-05T19:39:01.933Z] 9396.54 IOPS, 36.71 MiB/s [2024-12-05T19:39:01.933Z] 9413.93 IOPS, 36.77 MiB/s [2024-12-05T19:39:01.933Z] 9408.73 IOPS, 36.75 MiB/s [2024-12-05T19:39:01.933Z] [2024-12-05 19:38:41.412827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.496 [2024-12-05 19:38:41.412899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.412972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.496 [2024-12-05 19:38:41.412992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.413014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.496 [2024-12-05 19:38:41.413029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.413050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.496 [2024-12-05 19:38:41.413064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.413085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.496 [2024-12-05 19:38:41.413099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.413119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.496 [2024-12-05 19:38:41.413148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.413171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.496 [2024-12-05 19:38:41.413187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.413207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:08.496 [2024-12-05 19:38:41.413221] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.413242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.496 [2024-12-05 19:38:41.413256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.413308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.496 [2024-12-05 19:38:41.413325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.413346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.496 [2024-12-05 19:38:41.413360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.413380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.496 [2024-12-05 19:38:41.413395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.413415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.496 [2024-12-05 19:38:41.413429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.413449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.496 [2024-12-05 19:38:41.413464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.413484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.496 [2024-12-05 19:38:41.413498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.413518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.496 [2024-12-05 19:38:41.413533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.413553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.496 [2024-12-05 19:38:41.413567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:08.496 [2024-12-05 19:38:41.413588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:19:08.496 [2024-12-05 19:38:41.413604 - 19:38:41.418917] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (condensed) a long run of further READ commands (lba 99624-99984) and WRITE commands (lba 100056-100544), all sqid:1 nsid:1 len:8, each logged together with a completion of ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0; the individual command/completion pairs are identical in form to the ones above and are not repeated here.
00:19:08.500 8969.69 IOPS, 35.04 MiB/s [2024-12-05T19:39:01.937Z] 8442.06 IOPS, 32.98 MiB/s [2024-12-05T19:39:01.937Z] 7973.06 IOPS, 31.14 MiB/s [2024-12-05T19:39:01.937Z] 7553.42 IOPS, 29.51 MiB/s [2024-12-05T19:39:01.937Z] 7534.15 IOPS, 29.43 MiB/s [2024-12-05T19:39:01.937Z] 7629.48 IOPS, 29.80 MiB/s [2024-12-05T19:39:01.937Z] 7705.95 IOPS, 30.10 MiB/s [2024-12-05T19:39:01.937Z] 7910.57 IOPS, 30.90 MiB/s [2024-12-05T19:39:01.937Z] 8083.83 IOPS, 31.58 MiB/s [2024-12-05T19:39:01.937Z] 8263.56 IOPS, 32.28 MiB/s [2024-12-05T19:39:01.937Z] 8380.54 IOPS, 32.74 MiB/s [2024-12-05T19:39:01.937Z] 8415.93 IOPS, 32.87 MiB/s [2024-12-05T19:39:01.937Z] 8444.79 IOPS, 32.99 MiB/s [2024-12-05T19:39:01.937Z] 8475.76 IOPS, 33.11 MiB/s [2024-12-05T19:39:01.937Z] 8580.03 IOPS, 33.52 MiB/s [2024-12-05T19:39:01.937Z] 8702.45 IOPS, 33.99 MiB/s [2024-12-05T19:39:01.937Z] 8821.47 IOPS, 34.46 MiB/s [2024-12-05T19:39:01.937Z]
00:19:08.500 [2024-12-05 19:38:58.849428 - 19:38:58.853368] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (condensed) a second, shorter burst of WRITE commands (lba 100472-101072) and READ commands (lba 100040-100512), again sqid:1 nsid:1 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0; per-command output omitted.
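The repeated status above decodes as NVMe status code type 0x3 (path related) / status code 0x02 (Asymmetric Access Inaccessible): while one listener of the subsystem is in the ANA "inaccessible" state, I/O queued to that path completes with this status and the host's multipath layer is expected to retry on another path. The RPC call that flips the listener's ANA state is not shown in this excerpt; the line below is only a hedged sketch of the kind of invocation the multipath_status suite drives through rpc.py, and the -a/-s/-n argument values and the NVMF_* variable names are assumptions here, not values copied from this log.

        # Hedged sketch -- argument values and variable names are assumptions, not taken from this log:
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
                nqn.2016-06.io.spdk:cnode1 -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" -n inaccessible

Setting the state back (for example to "optimized") in the same way would end a burst like the one above, which matches the recovery of the IOPS readings that follows.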
00:19:08.503 8911.94 IOPS, 34.81 MiB/s [2024-12-05T19:39:01.940Z] 8925.82 IOPS, 34.87 MiB/s [2024-12-05T19:39:01.940Z] 8932.06 IOPS, 34.89 MiB/s [2024-12-05T19:39:01.940Z]
00:19:08.503 Received shutdown signal, test time was about 35.525735 seconds
00:19:08.503
00:19:08.503 Latency(us)
00:19:08.503 [2024-12-05T19:39:01.940Z] Device Information          : runtime(s)      IOPS     MiB/s   Fail/s   TO/s    Average      min        max
00:19:08.503 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:08.503 Verification LBA range: start 0x0 length 0x4000
00:19:08.503 Nvme0n1                     :      35.52    8933.62    34.90    0.00    0.00   14296.78   174.08 4026531.84
00:19:08.503 [2024-12-05T19:39:01.940Z] ===================================================================================================================
00:19:08.503 [2024-12-05T19:39:01.940Z] Total                       :             8933.62    34.90    0.00    0.00   14296.78   174.08 4026531.84
00:19:08.503 19:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:08.762 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:19:08.762 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:19:08.762 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:19:08.762 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:08.762 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:09.025 rmmod nvme_tcp
00:19:09.025 rmmod nvme_fabrics
00:19:09.025 rmmod nvme_keyring
00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76383 ']'
00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76383
00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76383 ']'
00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76383
00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76383
00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:09.025 killing process with pid 76383 00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76383' 00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76383 00:19:09.025 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76383 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:09.284 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:09.543 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:09.544 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:09.544 19:39:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.544 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.544 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.544 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:19:09.544 00:19:09.544 real 0m40.914s 00:19:09.544 user 2m13.101s 00:19:09.544 sys 0m12.098s 00:19:09.544 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.544 19:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:09.544 ************************************ 00:19:09.544 END TEST nvmf_host_multipath_status 00:19:09.544 ************************************ 00:19:09.544 19:39:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:09.544 19:39:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:09.544 19:39:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.544 19:39:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.544 ************************************ 00:19:09.544 START TEST nvmf_discovery_remove_ifc 00:19:09.544 ************************************ 00:19:09.544 19:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:09.544 * Looking for test storage... 
00:19:09.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:09.544 19:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:09.544 19:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:09.544 19:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:09.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.804 --rc genhtml_branch_coverage=1 00:19:09.804 --rc genhtml_function_coverage=1 00:19:09.804 --rc genhtml_legend=1 00:19:09.804 --rc geninfo_all_blocks=1 00:19:09.804 --rc geninfo_unexecuted_blocks=1 00:19:09.804 00:19:09.804 ' 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:09.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.804 --rc genhtml_branch_coverage=1 00:19:09.804 --rc genhtml_function_coverage=1 00:19:09.804 --rc genhtml_legend=1 00:19:09.804 --rc geninfo_all_blocks=1 00:19:09.804 --rc geninfo_unexecuted_blocks=1 00:19:09.804 00:19:09.804 ' 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:09.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.804 --rc genhtml_branch_coverage=1 00:19:09.804 --rc genhtml_function_coverage=1 00:19:09.804 --rc genhtml_legend=1 00:19:09.804 --rc geninfo_all_blocks=1 00:19:09.804 --rc geninfo_unexecuted_blocks=1 00:19:09.804 00:19:09.804 ' 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:09.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.804 --rc genhtml_branch_coverage=1 00:19:09.804 --rc genhtml_function_coverage=1 00:19:09.804 --rc genhtml_legend=1 00:19:09.804 --rc geninfo_all_blocks=1 00:19:09.804 --rc geninfo_unexecuted_blocks=1 00:19:09.804 00:19:09.804 ' 00:19:09.804 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:09.805 19:39:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:09.805 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:09.805 19:39:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:09.805 Cannot find device "nvmf_init_br" 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:09.805 Cannot find device "nvmf_init_br2" 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:09.805 Cannot find device "nvmf_tgt_br" 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:09.805 Cannot find device "nvmf_tgt_br2" 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:09.805 Cannot find device "nvmf_init_br" 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:09.805 Cannot find device "nvmf_init_br2" 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:19:09.805 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:09.806 Cannot find device "nvmf_tgt_br" 00:19:09.806 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:19:09.806 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:09.806 Cannot find device "nvmf_tgt_br2" 00:19:09.806 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:19:09.806 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:09.806 Cannot find device "nvmf_br" 00:19:09.806 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:19:09.806 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:09.806 Cannot find device "nvmf_init_if" 00:19:09.806 19:39:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:19:09.806 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:09.806 Cannot find device "nvmf_init_if2" 00:19:09.806 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:19:09.806 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:09.806 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.806 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:19:09.806 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:09.806 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.806 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:19:09.806 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:09.806 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:09.806 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:09.806 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:10.065 19:39:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:10.065 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:10.065 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:19:10.065 00:19:10.065 --- 10.0.0.3 ping statistics --- 00:19:10.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.065 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:10.065 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:10.065 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:19:10.065 00:19:10.065 --- 10.0.0.4 ping statistics --- 00:19:10.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.065 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:10.065 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:10.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:10.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:10.324 00:19:10.324 --- 10.0.0.1 ping statistics --- 00:19:10.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.324 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:10.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:10.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:19:10.324 00:19:10.324 --- 10.0.0.2 ping statistics --- 00:19:10.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.324 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77288 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77288 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77288 ']' 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
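At this point the target application is launched inside the nvmf_tgt_ns_spdk namespace and the test blocks until its RPC socket answers. A minimal sketch of that start-and-wait step, reconstructed from the traced command line and the /var/tmp/spdk.sock message above; the polling loop stands in for waitforlisten, whose real implementation in test/common/autotest_common.sh is more thorough:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Poll the default RPC socket (/var/tmp/spdk.sock) until the target responds.
while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited during startup"; exit 1; }
    sleep 0.5
done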
00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.324 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:10.324 [2024-12-05 19:39:03.601265] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:19:10.324 [2024-12-05 19:39:03.601383] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.324 [2024-12-05 19:39:03.756626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.584 [2024-12-05 19:39:03.817066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.584 [2024-12-05 19:39:03.817168] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.584 [2024-12-05 19:39:03.817196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.584 [2024-12-05 19:39:03.817207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.584 [2024-12-05 19:39:03.817216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:10.584 [2024-12-05 19:39:03.817690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.584 [2024-12-05 19:39:03.877052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:10.584 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.584 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:10.584 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:10.584 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:10.584 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:10.584 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.584 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:10.584 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.584 19:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:10.584 [2024-12-05 19:39:04.005994] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.584 [2024-12-05 19:39:04.014145] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:10.843 null0 00:19:10.843 [2024-12-05 19:39:04.046031] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:10.843 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.843 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77313 00:19:10.843 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77313 /tmp/host.sock 00:19:10.843 
19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:10.843 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77313 ']' 00:19:10.843 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:19:10.843 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.843 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:10.843 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:10.843 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.843 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:10.843 [2024-12-05 19:39:04.129427] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:19:10.843 [2024-12-05 19:39:04.129571] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77313 ] 00:19:10.843 [2024-12-05 19:39:04.277688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.102 [2024-12-05 19:39:04.342948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.102 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.102 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:11.102 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:11.102 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:11.102 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.102 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:11.102 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.102 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:11.102 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.102 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:11.102 [2024-12-05 19:39:04.454417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:11.102 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.102 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:11.102 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.102 19:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:12.479 [2024-12-05 19:39:05.519041] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:12.479 [2024-12-05 19:39:05.519103] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:12.479 [2024-12-05 19:39:05.519154] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:12.479 [2024-12-05 19:39:05.525089] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:12.479 [2024-12-05 19:39:05.579545] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:12.479 [2024-12-05 19:39:05.580774] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f08f00:1 started. 00:19:12.479 [2024-12-05 19:39:05.582835] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:12.479 [2024-12-05 19:39:05.582911] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:12.479 [2024-12-05 19:39:05.582940] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:12.480 [2024-12-05 19:39:05.582956] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:12.480 [2024-12-05 19:39:05.582982] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:12.480 [2024-12-05 19:39:05.587947] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f08f00 was disconnected and freed. delete nvme_qpair. 
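What the test exercises from here on is a small polling helper: list the bdevs known to the host app over /tmp/host.sock and wait until the list matches the expected value (nvme0n1 once discovery has attached the subsystem, the empty string after the target interface is removed). A sketch reconstructed from the rpc/jq/sort/xargs calls traced below; the real get_bdev_list and wait_for_bdev helpers are defined in host/discovery_remove_ifc.sh and may differ in details:

get_bdev_list() {
    # names of all bdevs known to the host app, sorted, joined on one line
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs |
        jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # poll once per second until the list equals the expected value ("nvme0n1" or "")
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}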
00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:12.480 19:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:13.442 19:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:13.442 19:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:13.442 19:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:13.442 19:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.442 19:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:13.442 19:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:13.442 19:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:13.442 19:39:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.442 19:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:13.442 19:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:14.376 19:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:14.376 19:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:14.376 19:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.376 19:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:14.376 19:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:14.376 19:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:14.376 19:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:14.376 19:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.635 19:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:14.635 19:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:15.571 19:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:15.571 19:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:15.571 19:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:15.571 19:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:15.571 19:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.571 19:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:15.571 19:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:15.571 19:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.571 19:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:15.571 19:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:16.503 19:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:16.503 19:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:16.503 19:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:16.503 19:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.503 19:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:16.503 19:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:16.503 19:39:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:16.761 19:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.761 19:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:16.761 19:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:17.696 19:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:17.696 19:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:17.696 19:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:17.696 19:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.696 19:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:17.696 19:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:17.696 19:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:17.696 19:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.696 [2024-12-05 19:39:11.010585] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:17.696 [2024-12-05 19:39:11.010669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.696 [2024-12-05 19:39:11.010716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.696 [2024-12-05 19:39:11.010729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.696 [2024-12-05 19:39:11.010739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.696 [2024-12-05 19:39:11.010749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.696 [2024-12-05 19:39:11.010759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.696 [2024-12-05 19:39:11.010769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.696 [2024-12-05 19:39:11.010778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.696 [2024-12-05 19:39:11.010789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.696 [2024-12-05 19:39:11.010798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.696 [2024-12-05 19:39:11.010808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee4fc0 is same with the state(6) to be set 00:19:17.696 [2024-12-05 19:39:11.020583] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee4fc0 (9): Bad file descriptor 00:19:17.696 [2024-12-05 19:39:11.030628] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:17.696 [2024-12-05 19:39:11.030669] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:17.696 [2024-12-05 19:39:11.030675] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:17.696 [2024-12-05 19:39:11.030681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:17.696 [2024-12-05 19:39:11.030748] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:17.696 19:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:17.696 19:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:18.636 19:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:18.637 19:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:18.637 19:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:18.637 19:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:18.637 19:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.637 19:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:18.637 19:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:18.637 [2024-12-05 19:39:12.055248] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:19:18.637 [2024-12-05 19:39:12.055372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee4fc0 with addr=10.0.0.3, port=4420 00:19:18.637 [2024-12-05 19:39:12.055408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee4fc0 is same with the state(6) to be set 00:19:18.637 [2024-12-05 19:39:12.055473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee4fc0 (9): Bad file descriptor 00:19:18.637 [2024-12-05 19:39:12.056399] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:19:18.637 [2024-12-05 19:39:12.056494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:18.637 [2024-12-05 19:39:12.056518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:18.637 [2024-12-05 19:39:12.056538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:18.637 [2024-12-05 19:39:12.056556] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:18.637 [2024-12-05 19:39:12.056569] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:19:18.637 [2024-12-05 19:39:12.056580] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:18.637 [2024-12-05 19:39:12.056600] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:18.637 [2024-12-05 19:39:12.056611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:18.895 19:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.895 19:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:18.895 19:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:19.829 [2024-12-05 19:39:13.056689] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:19.829 [2024-12-05 19:39:13.056772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:19.829 [2024-12-05 19:39:13.056803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:19.829 [2024-12-05 19:39:13.056814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:19.829 [2024-12-05 19:39:13.056824] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:19:19.829 [2024-12-05 19:39:13.056833] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:19.829 [2024-12-05 19:39:13.056840] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:19.829 [2024-12-05 19:39:13.056845] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
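At this point the interface carrying 10.0.0.3 has been removed and the harness is simply re-reading the bdev list over the host RPC socket once per second until it drains (and, later in the test, until nvme1n1 shows up again). A condensed reconstruction of that polling pattern from the xtrace above, not the script verbatim, using the same /tmp/host.sock socket and the helper names that appear in the trace:

    # get_bdev_list: names of all bdevs known to the host app, sorted onto one line
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # wait_for_bdev: poll once per second until the list matches what the test expects
    wait_for_bdev() {
        local expected=$1
        while [[ $(get_bdev_list) != "$expected" ]]; do
            sleep 1
        done
    }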
00:19:19.829 [2024-12-05 19:39:13.056877] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:19:19.829 [2024-12-05 19:39:13.056931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.829 [2024-12-05 19:39:13.056945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.829 [2024-12-05 19:39:13.056958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.829 [2024-12-05 19:39:13.056967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.829 [2024-12-05 19:39:13.056977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.830 [2024-12-05 19:39:13.056985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.830 [2024-12-05 19:39:13.056995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.830 [2024-12-05 19:39:13.057003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.830 [2024-12-05 19:39:13.057012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.830 [2024-12-05 19:39:13.057021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.830 [2024-12-05 19:39:13.057029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:19:19.830 [2024-12-05 19:39:13.057049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e70a20 (9): Bad file descriptor 00:19:19.830 [2024-12-05 19:39:13.057785] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:19.830 [2024-12-05 19:39:13.057824] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:19.830 19:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:21.207 19:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:21.207 19:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:21.207 19:39:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:21.207 19:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.207 19:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:21.207 19:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:21.207 19:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:21.207 19:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.207 19:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:21.207 19:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:21.774 [2024-12-05 19:39:15.064284] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:21.774 [2024-12-05 19:39:15.064347] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:21.774 [2024-12-05 19:39:15.064376] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:21.774 [2024-12-05 19:39:15.070331] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:19:21.774 [2024-12-05 19:39:15.124800] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:19:21.774 [2024-12-05 19:39:15.125953] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1f111d0:1 started. 00:19:21.774 [2024-12-05 19:39:15.127665] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:21.774 [2024-12-05 19:39:15.127734] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:21.774 [2024-12-05 19:39:15.127798] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:21.774 [2024-12-05 19:39:15.127825] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:19:21.774 [2024-12-05 19:39:15.127865] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:21.774 [2024-12-05 19:39:15.132974] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1f111d0 was disconnected and freed. delete nvme_qpair. 
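Recovery is the mirror image of the removal: the test puts the 10.0.0.3 address back on the target interface inside the target network namespace, brings the link up, and waits for the discovery service to re-attach the subsystem as a fresh bdev. The steps below are lifted directly from the xtrace above (host/discovery_remove_ifc.sh@82-86); only the comments are added:

    # Restore the target address inside the target namespace and bring the link up
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # The discovery poller reconnects and creates a new controller; wait for its namespace bdev
    wait_for_bdev nvme1n1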
00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77313 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77313 ']' 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77313 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77313 00:19:22.034 killing process with pid 77313 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77313' 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77313 00:19:22.034 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77313 00:19:22.292 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:22.292 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:22.292 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:19:22.292 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:22.292 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:19:22.292 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:22.292 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:22.292 rmmod nvme_tcp 00:19:22.292 rmmod nvme_fabrics 00:19:22.292 rmmod nvme_keyring 00:19:22.292 19:39:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:22.292 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:19:22.292 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:19:22.292 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77288 ']' 00:19:22.292 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77288 00:19:22.292 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77288 ']' 00:19:22.292 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77288 00:19:22.292 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:22.292 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.292 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77288 00:19:22.550 killing process with pid 77288 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77288' 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77288 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77288 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:22.550 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:22.807 19:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:22.807 19:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:22.807 19:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:22.807 19:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:22.807 19:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:22.807 19:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:22.807 19:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:22.807 19:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:22.807 19:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:22.807 19:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:22.807 19:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:22.807 19:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.807 19:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.807 19:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.807 19:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:19:22.807 00:19:22.807 real 0m13.338s 00:19:22.807 user 0m22.524s 00:19:22.807 sys 0m2.505s 00:19:22.807 19:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.807 ************************************ 00:19:22.807 END TEST nvmf_discovery_remove_ifc 00:19:22.807 ************************************ 00:19:22.807 19:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:22.808 19:39:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:22.808 19:39:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:22.808 19:39:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.808 19:39:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.808 ************************************ 00:19:22.808 START TEST nvmf_identify_kernel_target 00:19:22.808 ************************************ 00:19:22.808 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:23.066 * Looking for test storage... 
00:19:23.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:23.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.066 --rc genhtml_branch_coverage=1 00:19:23.066 --rc genhtml_function_coverage=1 00:19:23.066 --rc genhtml_legend=1 00:19:23.066 --rc geninfo_all_blocks=1 00:19:23.066 --rc geninfo_unexecuted_blocks=1 00:19:23.066 00:19:23.066 ' 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:23.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.066 --rc genhtml_branch_coverage=1 00:19:23.066 --rc genhtml_function_coverage=1 00:19:23.066 --rc genhtml_legend=1 00:19:23.066 --rc geninfo_all_blocks=1 00:19:23.066 --rc geninfo_unexecuted_blocks=1 00:19:23.066 00:19:23.066 ' 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:23.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.066 --rc genhtml_branch_coverage=1 00:19:23.066 --rc genhtml_function_coverage=1 00:19:23.066 --rc genhtml_legend=1 00:19:23.066 --rc geninfo_all_blocks=1 00:19:23.066 --rc geninfo_unexecuted_blocks=1 00:19:23.066 00:19:23.066 ' 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:23.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.066 --rc genhtml_branch_coverage=1 00:19:23.066 --rc genhtml_function_coverage=1 00:19:23.066 --rc genhtml_legend=1 00:19:23.066 --rc geninfo_all_blocks=1 00:19:23.066 --rc geninfo_unexecuted_blocks=1 00:19:23.066 00:19:23.066 ' 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
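Before sourcing test/nvmf/common.sh, the preamble traced above probes the installed lcov and picks coverage flags accordingly: the version string is split on '.', '-' and ':' and compared field by field against 2 (lt 1.15 2 in the trace). A rough sketch of that comparison, reconstructed from the xtrace rather than copied from scripts/common.sh, with an illustrative function name and a simplified usage line:

    # version_lt A B: succeed (return 0) when version A is strictly older than version B
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not 'less than'
    }

    # lcov older than 2 -> use the legacy --rc lcov_*_coverage=1 style flags (simplified)
    version_lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'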
00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.066 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:23.067 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:23.067 19:39:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:23.067 19:39:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:23.067 Cannot find device "nvmf_init_br" 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:23.067 Cannot find device "nvmf_init_br2" 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:23.067 Cannot find device "nvmf_tgt_br" 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:23.067 Cannot find device "nvmf_tgt_br2" 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:23.067 Cannot find device "nvmf_init_br" 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:19:23.067 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:23.326 Cannot find device "nvmf_init_br2" 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:23.326 Cannot find device "nvmf_tgt_br" 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:23.326 Cannot find device "nvmf_tgt_br2" 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:23.326 Cannot find device "nvmf_br" 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:23.326 Cannot find device "nvmf_init_if" 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:23.326 Cannot find device "nvmf_init_if2" 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:23.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:23.326 19:39:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:23.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:23.326 19:39:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:23.326 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:23.585 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:23.585 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:19:23.585 00:19:23.585 --- 10.0.0.3 ping statistics --- 00:19:23.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.585 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:23.585 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:23.585 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:19:23.585 00:19:23.585 --- 10.0.0.4 ping statistics --- 00:19:23.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.585 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:23.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:23.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:19:23.585 00:19:23.585 --- 10.0.0.1 ping statistics --- 00:19:23.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.585 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:23.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:23.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:19:23.585 00:19:23.585 --- 10.0.0.2 ping statistics --- 00:19:23.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.585 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:23.585 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:23.586 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:23.586 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:23.586 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:23.586 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:23.586 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:23.586 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:23.586 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:23.586 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:23.586 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:19:23.586 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:19:23.586 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:23.586 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:23.586 19:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:23.845 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:23.845 Waiting for block devices as requested 00:19:23.845 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:24.106 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:24.106 No valid GPT data, bailing 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:24.106 19:39:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:24.106 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:24.364 No valid GPT data, bailing 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:24.364 No valid GPT data, bailing 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:24.364 No valid GPT data, bailing 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:24.364 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:24.365 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:24.365 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:24.365 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:24.365 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:19:24.365 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:24.365 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:19:24.365 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:24.365 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:19:24.365 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:19:24.365 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:19:24.365 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:24.365 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -a 10.0.0.1 -t tcp -s 4420 00:19:24.623 00:19:24.623 Discovery Log Number of Records 2, Generation counter 2 00:19:24.623 =====Discovery Log Entry 0====== 00:19:24.623 trtype: tcp 00:19:24.623 adrfam: ipv4 00:19:24.623 subtype: current discovery subsystem 00:19:24.623 treq: not specified, sq flow control disable supported 00:19:24.623 portid: 1 00:19:24.623 trsvcid: 4420 00:19:24.623 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:24.623 traddr: 10.0.0.1 00:19:24.623 eflags: none 00:19:24.623 sectype: none 00:19:24.623 =====Discovery Log Entry 1====== 00:19:24.623 trtype: tcp 00:19:24.623 adrfam: ipv4 00:19:24.623 subtype: nvme subsystem 00:19:24.623 treq: not 
specified, sq flow control disable supported 00:19:24.623 portid: 1 00:19:24.623 trsvcid: 4420 00:19:24.623 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:24.623 traddr: 10.0.0.1 00:19:24.623 eflags: none 00:19:24.623 sectype: none 00:19:24.623 19:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:24.623 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:24.623 ===================================================== 00:19:24.623 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:24.623 ===================================================== 00:19:24.623 Controller Capabilities/Features 00:19:24.623 ================================ 00:19:24.623 Vendor ID: 0000 00:19:24.623 Subsystem Vendor ID: 0000 00:19:24.623 Serial Number: 82a2da6229f4bda27b5a 00:19:24.623 Model Number: Linux 00:19:24.623 Firmware Version: 6.8.9-20 00:19:24.623 Recommended Arb Burst: 0 00:19:24.623 IEEE OUI Identifier: 00 00 00 00:19:24.623 Multi-path I/O 00:19:24.623 May have multiple subsystem ports: No 00:19:24.623 May have multiple controllers: No 00:19:24.623 Associated with SR-IOV VF: No 00:19:24.623 Max Data Transfer Size: Unlimited 00:19:24.623 Max Number of Namespaces: 0 00:19:24.623 Max Number of I/O Queues: 1024 00:19:24.623 NVMe Specification Version (VS): 1.3 00:19:24.623 NVMe Specification Version (Identify): 1.3 00:19:24.623 Maximum Queue Entries: 1024 00:19:24.623 Contiguous Queues Required: No 00:19:24.623 Arbitration Mechanisms Supported 00:19:24.623 Weighted Round Robin: Not Supported 00:19:24.623 Vendor Specific: Not Supported 00:19:24.623 Reset Timeout: 7500 ms 00:19:24.623 Doorbell Stride: 4 bytes 00:19:24.623 NVM Subsystem Reset: Not Supported 00:19:24.623 Command Sets Supported 00:19:24.623 NVM Command Set: Supported 00:19:24.623 Boot Partition: Not Supported 00:19:24.623 Memory Page Size Minimum: 4096 bytes 00:19:24.623 Memory Page Size Maximum: 4096 bytes 00:19:24.623 Persistent Memory Region: Not Supported 00:19:24.623 Optional Asynchronous Events Supported 00:19:24.623 Namespace Attribute Notices: Not Supported 00:19:24.623 Firmware Activation Notices: Not Supported 00:19:24.623 ANA Change Notices: Not Supported 00:19:24.623 PLE Aggregate Log Change Notices: Not Supported 00:19:24.623 LBA Status Info Alert Notices: Not Supported 00:19:24.624 EGE Aggregate Log Change Notices: Not Supported 00:19:24.624 Normal NVM Subsystem Shutdown event: Not Supported 00:19:24.624 Zone Descriptor Change Notices: Not Supported 00:19:24.624 Discovery Log Change Notices: Supported 00:19:24.624 Controller Attributes 00:19:24.624 128-bit Host Identifier: Not Supported 00:19:24.624 Non-Operational Permissive Mode: Not Supported 00:19:24.624 NVM Sets: Not Supported 00:19:24.624 Read Recovery Levels: Not Supported 00:19:24.624 Endurance Groups: Not Supported 00:19:24.624 Predictable Latency Mode: Not Supported 00:19:24.624 Traffic Based Keep ALive: Not Supported 00:19:24.624 Namespace Granularity: Not Supported 00:19:24.624 SQ Associations: Not Supported 00:19:24.624 UUID List: Not Supported 00:19:24.624 Multi-Domain Subsystem: Not Supported 00:19:24.624 Fixed Capacity Management: Not Supported 00:19:24.624 Variable Capacity Management: Not Supported 00:19:24.624 Delete Endurance Group: Not Supported 00:19:24.624 Delete NVM Set: Not Supported 00:19:24.624 Extended LBA Formats Supported: Not Supported 00:19:24.624 Flexible Data 
Placement Supported: Not Supported 00:19:24.624 00:19:24.624 Controller Memory Buffer Support 00:19:24.624 ================================ 00:19:24.624 Supported: No 00:19:24.624 00:19:24.624 Persistent Memory Region Support 00:19:24.624 ================================ 00:19:24.624 Supported: No 00:19:24.624 00:19:24.624 Admin Command Set Attributes 00:19:24.624 ============================ 00:19:24.624 Security Send/Receive: Not Supported 00:19:24.624 Format NVM: Not Supported 00:19:24.624 Firmware Activate/Download: Not Supported 00:19:24.624 Namespace Management: Not Supported 00:19:24.624 Device Self-Test: Not Supported 00:19:24.624 Directives: Not Supported 00:19:24.624 NVMe-MI: Not Supported 00:19:24.624 Virtualization Management: Not Supported 00:19:24.624 Doorbell Buffer Config: Not Supported 00:19:24.624 Get LBA Status Capability: Not Supported 00:19:24.624 Command & Feature Lockdown Capability: Not Supported 00:19:24.624 Abort Command Limit: 1 00:19:24.624 Async Event Request Limit: 1 00:19:24.624 Number of Firmware Slots: N/A 00:19:24.624 Firmware Slot 1 Read-Only: N/A 00:19:24.624 Firmware Activation Without Reset: N/A 00:19:24.624 Multiple Update Detection Support: N/A 00:19:24.624 Firmware Update Granularity: No Information Provided 00:19:24.624 Per-Namespace SMART Log: No 00:19:24.624 Asymmetric Namespace Access Log Page: Not Supported 00:19:24.624 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:24.624 Command Effects Log Page: Not Supported 00:19:24.624 Get Log Page Extended Data: Supported 00:19:24.624 Telemetry Log Pages: Not Supported 00:19:24.624 Persistent Event Log Pages: Not Supported 00:19:24.624 Supported Log Pages Log Page: May Support 00:19:24.624 Commands Supported & Effects Log Page: Not Supported 00:19:24.624 Feature Identifiers & Effects Log Page:May Support 00:19:24.624 NVMe-MI Commands & Effects Log Page: May Support 00:19:24.624 Data Area 4 for Telemetry Log: Not Supported 00:19:24.624 Error Log Page Entries Supported: 1 00:19:24.624 Keep Alive: Not Supported 00:19:24.624 00:19:24.624 NVM Command Set Attributes 00:19:24.624 ========================== 00:19:24.624 Submission Queue Entry Size 00:19:24.624 Max: 1 00:19:24.624 Min: 1 00:19:24.624 Completion Queue Entry Size 00:19:24.624 Max: 1 00:19:24.624 Min: 1 00:19:24.624 Number of Namespaces: 0 00:19:24.624 Compare Command: Not Supported 00:19:24.624 Write Uncorrectable Command: Not Supported 00:19:24.624 Dataset Management Command: Not Supported 00:19:24.624 Write Zeroes Command: Not Supported 00:19:24.624 Set Features Save Field: Not Supported 00:19:24.624 Reservations: Not Supported 00:19:24.624 Timestamp: Not Supported 00:19:24.624 Copy: Not Supported 00:19:24.624 Volatile Write Cache: Not Present 00:19:24.624 Atomic Write Unit (Normal): 1 00:19:24.624 Atomic Write Unit (PFail): 1 00:19:24.624 Atomic Compare & Write Unit: 1 00:19:24.624 Fused Compare & Write: Not Supported 00:19:24.624 Scatter-Gather List 00:19:24.624 SGL Command Set: Supported 00:19:24.624 SGL Keyed: Not Supported 00:19:24.624 SGL Bit Bucket Descriptor: Not Supported 00:19:24.624 SGL Metadata Pointer: Not Supported 00:19:24.624 Oversized SGL: Not Supported 00:19:24.624 SGL Metadata Address: Not Supported 00:19:24.624 SGL Offset: Supported 00:19:24.624 Transport SGL Data Block: Not Supported 00:19:24.624 Replay Protected Memory Block: Not Supported 00:19:24.624 00:19:24.624 Firmware Slot Information 00:19:24.624 ========================= 00:19:24.624 Active slot: 0 00:19:24.624 00:19:24.624 00:19:24.624 Error Log 
00:19:24.624 ========= 00:19:24.624 00:19:24.624 Active Namespaces 00:19:24.624 ================= 00:19:24.624 Discovery Log Page 00:19:24.624 ================== 00:19:24.624 Generation Counter: 2 00:19:24.624 Number of Records: 2 00:19:24.624 Record Format: 0 00:19:24.624 00:19:24.624 Discovery Log Entry 0 00:19:24.624 ---------------------- 00:19:24.624 Transport Type: 3 (TCP) 00:19:24.624 Address Family: 1 (IPv4) 00:19:24.624 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:24.624 Entry Flags: 00:19:24.624 Duplicate Returned Information: 0 00:19:24.624 Explicit Persistent Connection Support for Discovery: 0 00:19:24.624 Transport Requirements: 00:19:24.624 Secure Channel: Not Specified 00:19:24.624 Port ID: 1 (0x0001) 00:19:24.624 Controller ID: 65535 (0xffff) 00:19:24.624 Admin Max SQ Size: 32 00:19:24.624 Transport Service Identifier: 4420 00:19:24.624 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:24.624 Transport Address: 10.0.0.1 00:19:24.624 Discovery Log Entry 1 00:19:24.624 ---------------------- 00:19:24.624 Transport Type: 3 (TCP) 00:19:24.624 Address Family: 1 (IPv4) 00:19:24.624 Subsystem Type: 2 (NVM Subsystem) 00:19:24.624 Entry Flags: 00:19:24.624 Duplicate Returned Information: 0 00:19:24.624 Explicit Persistent Connection Support for Discovery: 0 00:19:24.624 Transport Requirements: 00:19:24.624 Secure Channel: Not Specified 00:19:24.624 Port ID: 1 (0x0001) 00:19:24.624 Controller ID: 65535 (0xffff) 00:19:24.624 Admin Max SQ Size: 32 00:19:24.624 Transport Service Identifier: 4420 00:19:24.624 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:24.624 Transport Address: 10.0.0.1 00:19:24.624 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:24.881 get_feature(0x01) failed 00:19:24.881 get_feature(0x02) failed 00:19:24.881 get_feature(0x04) failed 00:19:24.881 ===================================================== 00:19:24.881 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:24.881 ===================================================== 00:19:24.881 Controller Capabilities/Features 00:19:24.881 ================================ 00:19:24.881 Vendor ID: 0000 00:19:24.881 Subsystem Vendor ID: 0000 00:19:24.881 Serial Number: 0b9e32d69c219afd0bf2 00:19:24.881 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:24.881 Firmware Version: 6.8.9-20 00:19:24.881 Recommended Arb Burst: 6 00:19:24.881 IEEE OUI Identifier: 00 00 00 00:19:24.881 Multi-path I/O 00:19:24.881 May have multiple subsystem ports: Yes 00:19:24.881 May have multiple controllers: Yes 00:19:24.881 Associated with SR-IOV VF: No 00:19:24.881 Max Data Transfer Size: Unlimited 00:19:24.881 Max Number of Namespaces: 1024 00:19:24.881 Max Number of I/O Queues: 128 00:19:24.881 NVMe Specification Version (VS): 1.3 00:19:24.881 NVMe Specification Version (Identify): 1.3 00:19:24.881 Maximum Queue Entries: 1024 00:19:24.881 Contiguous Queues Required: No 00:19:24.881 Arbitration Mechanisms Supported 00:19:24.881 Weighted Round Robin: Not Supported 00:19:24.881 Vendor Specific: Not Supported 00:19:24.881 Reset Timeout: 7500 ms 00:19:24.882 Doorbell Stride: 4 bytes 00:19:24.882 NVM Subsystem Reset: Not Supported 00:19:24.882 Command Sets Supported 00:19:24.882 NVM Command Set: Supported 00:19:24.882 Boot Partition: Not Supported 00:19:24.882 Memory 
Page Size Minimum: 4096 bytes 00:19:24.882 Memory Page Size Maximum: 4096 bytes 00:19:24.882 Persistent Memory Region: Not Supported 00:19:24.882 Optional Asynchronous Events Supported 00:19:24.882 Namespace Attribute Notices: Supported 00:19:24.882 Firmware Activation Notices: Not Supported 00:19:24.882 ANA Change Notices: Supported 00:19:24.882 PLE Aggregate Log Change Notices: Not Supported 00:19:24.882 LBA Status Info Alert Notices: Not Supported 00:19:24.882 EGE Aggregate Log Change Notices: Not Supported 00:19:24.882 Normal NVM Subsystem Shutdown event: Not Supported 00:19:24.882 Zone Descriptor Change Notices: Not Supported 00:19:24.882 Discovery Log Change Notices: Not Supported 00:19:24.882 Controller Attributes 00:19:24.882 128-bit Host Identifier: Supported 00:19:24.882 Non-Operational Permissive Mode: Not Supported 00:19:24.882 NVM Sets: Not Supported 00:19:24.882 Read Recovery Levels: Not Supported 00:19:24.882 Endurance Groups: Not Supported 00:19:24.882 Predictable Latency Mode: Not Supported 00:19:24.882 Traffic Based Keep ALive: Supported 00:19:24.882 Namespace Granularity: Not Supported 00:19:24.882 SQ Associations: Not Supported 00:19:24.882 UUID List: Not Supported 00:19:24.882 Multi-Domain Subsystem: Not Supported 00:19:24.882 Fixed Capacity Management: Not Supported 00:19:24.882 Variable Capacity Management: Not Supported 00:19:24.882 Delete Endurance Group: Not Supported 00:19:24.882 Delete NVM Set: Not Supported 00:19:24.882 Extended LBA Formats Supported: Not Supported 00:19:24.882 Flexible Data Placement Supported: Not Supported 00:19:24.882 00:19:24.882 Controller Memory Buffer Support 00:19:24.882 ================================ 00:19:24.882 Supported: No 00:19:24.882 00:19:24.882 Persistent Memory Region Support 00:19:24.882 ================================ 00:19:24.882 Supported: No 00:19:24.882 00:19:24.882 Admin Command Set Attributes 00:19:24.882 ============================ 00:19:24.882 Security Send/Receive: Not Supported 00:19:24.882 Format NVM: Not Supported 00:19:24.882 Firmware Activate/Download: Not Supported 00:19:24.882 Namespace Management: Not Supported 00:19:24.882 Device Self-Test: Not Supported 00:19:24.882 Directives: Not Supported 00:19:24.882 NVMe-MI: Not Supported 00:19:24.882 Virtualization Management: Not Supported 00:19:24.882 Doorbell Buffer Config: Not Supported 00:19:24.882 Get LBA Status Capability: Not Supported 00:19:24.882 Command & Feature Lockdown Capability: Not Supported 00:19:24.882 Abort Command Limit: 4 00:19:24.882 Async Event Request Limit: 4 00:19:24.882 Number of Firmware Slots: N/A 00:19:24.882 Firmware Slot 1 Read-Only: N/A 00:19:24.882 Firmware Activation Without Reset: N/A 00:19:24.882 Multiple Update Detection Support: N/A 00:19:24.882 Firmware Update Granularity: No Information Provided 00:19:24.882 Per-Namespace SMART Log: Yes 00:19:24.882 Asymmetric Namespace Access Log Page: Supported 00:19:24.882 ANA Transition Time : 10 sec 00:19:24.882 00:19:24.882 Asymmetric Namespace Access Capabilities 00:19:24.882 ANA Optimized State : Supported 00:19:24.882 ANA Non-Optimized State : Supported 00:19:24.882 ANA Inaccessible State : Supported 00:19:24.882 ANA Persistent Loss State : Supported 00:19:24.882 ANA Change State : Supported 00:19:24.882 ANAGRPID is not changed : No 00:19:24.882 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:24.882 00:19:24.882 ANA Group Identifier Maximum : 128 00:19:24.882 Number of ANA Group Identifiers : 128 00:19:24.882 Max Number of Allowed Namespaces : 1024 00:19:24.882 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:19:24.882 Command Effects Log Page: Supported 00:19:24.882 Get Log Page Extended Data: Supported 00:19:24.882 Telemetry Log Pages: Not Supported 00:19:24.882 Persistent Event Log Pages: Not Supported 00:19:24.882 Supported Log Pages Log Page: May Support 00:19:24.882 Commands Supported & Effects Log Page: Not Supported 00:19:24.882 Feature Identifiers & Effects Log Page:May Support 00:19:24.882 NVMe-MI Commands & Effects Log Page: May Support 00:19:24.882 Data Area 4 for Telemetry Log: Not Supported 00:19:24.882 Error Log Page Entries Supported: 128 00:19:24.882 Keep Alive: Supported 00:19:24.882 Keep Alive Granularity: 1000 ms 00:19:24.882 00:19:24.882 NVM Command Set Attributes 00:19:24.882 ========================== 00:19:24.882 Submission Queue Entry Size 00:19:24.882 Max: 64 00:19:24.882 Min: 64 00:19:24.882 Completion Queue Entry Size 00:19:24.882 Max: 16 00:19:24.882 Min: 16 00:19:24.882 Number of Namespaces: 1024 00:19:24.882 Compare Command: Not Supported 00:19:24.882 Write Uncorrectable Command: Not Supported 00:19:24.882 Dataset Management Command: Supported 00:19:24.882 Write Zeroes Command: Supported 00:19:24.882 Set Features Save Field: Not Supported 00:19:24.882 Reservations: Not Supported 00:19:24.882 Timestamp: Not Supported 00:19:24.882 Copy: Not Supported 00:19:24.882 Volatile Write Cache: Present 00:19:24.882 Atomic Write Unit (Normal): 1 00:19:24.882 Atomic Write Unit (PFail): 1 00:19:24.882 Atomic Compare & Write Unit: 1 00:19:24.882 Fused Compare & Write: Not Supported 00:19:24.882 Scatter-Gather List 00:19:24.882 SGL Command Set: Supported 00:19:24.882 SGL Keyed: Not Supported 00:19:24.882 SGL Bit Bucket Descriptor: Not Supported 00:19:24.882 SGL Metadata Pointer: Not Supported 00:19:24.882 Oversized SGL: Not Supported 00:19:24.882 SGL Metadata Address: Not Supported 00:19:24.882 SGL Offset: Supported 00:19:24.882 Transport SGL Data Block: Not Supported 00:19:24.882 Replay Protected Memory Block: Not Supported 00:19:24.882 00:19:24.882 Firmware Slot Information 00:19:24.882 ========================= 00:19:24.882 Active slot: 0 00:19:24.882 00:19:24.882 Asymmetric Namespace Access 00:19:24.882 =========================== 00:19:24.882 Change Count : 0 00:19:24.882 Number of ANA Group Descriptors : 1 00:19:24.882 ANA Group Descriptor : 0 00:19:24.882 ANA Group ID : 1 00:19:24.882 Number of NSID Values : 1 00:19:24.882 Change Count : 0 00:19:24.882 ANA State : 1 00:19:24.882 Namespace Identifier : 1 00:19:24.882 00:19:24.882 Commands Supported and Effects 00:19:24.882 ============================== 00:19:24.882 Admin Commands 00:19:24.882 -------------- 00:19:24.882 Get Log Page (02h): Supported 00:19:24.882 Identify (06h): Supported 00:19:24.882 Abort (08h): Supported 00:19:24.882 Set Features (09h): Supported 00:19:24.882 Get Features (0Ah): Supported 00:19:24.882 Asynchronous Event Request (0Ch): Supported 00:19:24.882 Keep Alive (18h): Supported 00:19:24.882 I/O Commands 00:19:24.882 ------------ 00:19:24.882 Flush (00h): Supported 00:19:24.882 Write (01h): Supported LBA-Change 00:19:24.882 Read (02h): Supported 00:19:24.882 Write Zeroes (08h): Supported LBA-Change 00:19:24.882 Dataset Management (09h): Supported 00:19:24.882 00:19:24.882 Error Log 00:19:24.882 ========= 00:19:24.882 Entry: 0 00:19:24.882 Error Count: 0x3 00:19:24.882 Submission Queue Id: 0x0 00:19:24.882 Command Id: 0x5 00:19:24.882 Phase Bit: 0 00:19:24.882 Status Code: 0x2 00:19:24.882 Status Code Type: 0x0 00:19:24.883 Do Not Retry: 1 00:19:24.883 Error 
Location: 0x28 00:19:24.883 LBA: 0x0 00:19:24.883 Namespace: 0x0 00:19:24.883 Vendor Log Page: 0x0 00:19:24.883 ----------- 00:19:24.883 Entry: 1 00:19:24.883 Error Count: 0x2 00:19:24.883 Submission Queue Id: 0x0 00:19:24.883 Command Id: 0x5 00:19:24.883 Phase Bit: 0 00:19:24.883 Status Code: 0x2 00:19:24.883 Status Code Type: 0x0 00:19:24.883 Do Not Retry: 1 00:19:24.883 Error Location: 0x28 00:19:24.883 LBA: 0x0 00:19:24.883 Namespace: 0x0 00:19:24.883 Vendor Log Page: 0x0 00:19:24.883 ----------- 00:19:24.883 Entry: 2 00:19:24.883 Error Count: 0x1 00:19:24.883 Submission Queue Id: 0x0 00:19:24.883 Command Id: 0x4 00:19:24.883 Phase Bit: 0 00:19:24.883 Status Code: 0x2 00:19:24.883 Status Code Type: 0x0 00:19:24.883 Do Not Retry: 1 00:19:24.883 Error Location: 0x28 00:19:24.883 LBA: 0x0 00:19:24.883 Namespace: 0x0 00:19:24.883 Vendor Log Page: 0x0 00:19:24.883 00:19:24.883 Number of Queues 00:19:24.883 ================ 00:19:24.883 Number of I/O Submission Queues: 128 00:19:24.883 Number of I/O Completion Queues: 128 00:19:24.883 00:19:24.883 ZNS Specific Controller Data 00:19:24.883 ============================ 00:19:24.883 Zone Append Size Limit: 0 00:19:24.883 00:19:24.883 00:19:24.883 Active Namespaces 00:19:24.883 ================= 00:19:24.883 get_feature(0x05) failed 00:19:24.883 Namespace ID:1 00:19:24.883 Command Set Identifier: NVM (00h) 00:19:24.883 Deallocate: Supported 00:19:24.883 Deallocated/Unwritten Error: Not Supported 00:19:24.883 Deallocated Read Value: Unknown 00:19:24.883 Deallocate in Write Zeroes: Not Supported 00:19:24.883 Deallocated Guard Field: 0xFFFF 00:19:24.883 Flush: Supported 00:19:24.883 Reservation: Not Supported 00:19:24.883 Namespace Sharing Capabilities: Multiple Controllers 00:19:24.883 Size (in LBAs): 1310720 (5GiB) 00:19:24.883 Capacity (in LBAs): 1310720 (5GiB) 00:19:24.883 Utilization (in LBAs): 1310720 (5GiB) 00:19:24.883 UUID: af6369f5-689c-4a90-8cf6-8990ea620d55 00:19:24.883 Thin Provisioning: Not Supported 00:19:24.883 Per-NS Atomic Units: Yes 00:19:24.883 Atomic Boundary Size (Normal): 0 00:19:24.883 Atomic Boundary Size (PFail): 0 00:19:24.883 Atomic Boundary Offset: 0 00:19:24.883 NGUID/EUI64 Never Reused: No 00:19:24.883 ANA group ID: 1 00:19:24.883 Namespace Write Protected: No 00:19:24.883 Number of LBA Formats: 1 00:19:24.883 Current LBA Format: LBA Format #00 00:19:24.883 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:24.883 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:24.883 rmmod nvme_tcp 00:19:24.883 rmmod nvme_fabrics 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:19:24.883 19:39:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:24.883 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:25.140 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:25.140 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:25.140 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:25.140 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:25.140 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:25.140 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:25.140 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:25.140 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:25.140 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:25.140 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:25.141 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:25.141 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:25.141 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:25.141 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.141 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.141 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.141 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:19:25.141 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:25.141 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:25.141 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:19:25.141 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:25.141 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:25.141 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:25.141 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:25.141 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:25.141 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:19:25.398 19:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:25.995 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:25.995 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:26.252 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:26.252 00:19:26.252 real 0m3.269s 00:19:26.252 user 0m1.162s 00:19:26.252 sys 0m1.530s 00:19:26.252 19:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.252 ************************************ 00:19:26.252 END TEST nvmf_identify_kernel_target 00:19:26.252 ************************************ 00:19:26.252 19:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.252 19:39:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:26.252 19:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:26.252 19:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.252 19:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.252 ************************************ 00:19:26.252 START TEST nvmf_auth_host 00:19:26.252 ************************************ 00:19:26.252 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:26.252 * Looking for test storage... 
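For reference, the kernel NVMe-oF target that nvmf_identify_kernel_target set up and tore down above (configure_kernel_target / clean_kernel_target in nvmf/common.sh) reduces to a short configfs sequence. The sketch below is a condensed, hand-runnable approximation; the paths, NQN, port number and the /dev/nvme1n1 namespace device are taken from the log, while the specific attribute files receiving each echoed value are an assumption based on the standard nvmet configfs layout, because xtrace does not print redirection targets.

# Condensed sketch of the nvmet configfs setup/teardown traced above.
# Assumption: the echoed values land in the usual nvmet attribute files
# (attr_model, attr_allow_any_host, device_path, enable, addr_*).
modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"    # expose the subsystem on the TCP port
# Teardown, mirroring clean_kernel_target above (the traced "echo 0" is
# assumed to disable the namespace before the directories are removed):
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet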
00:19:26.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:26.252 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:26.252 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:19:26.252 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.511 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:26.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.512 --rc genhtml_branch_coverage=1 00:19:26.512 --rc genhtml_function_coverage=1 00:19:26.512 --rc genhtml_legend=1 00:19:26.512 --rc geninfo_all_blocks=1 00:19:26.512 --rc geninfo_unexecuted_blocks=1 00:19:26.512 00:19:26.512 ' 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:26.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.512 --rc genhtml_branch_coverage=1 00:19:26.512 --rc genhtml_function_coverage=1 00:19:26.512 --rc genhtml_legend=1 00:19:26.512 --rc geninfo_all_blocks=1 00:19:26.512 --rc geninfo_unexecuted_blocks=1 00:19:26.512 00:19:26.512 ' 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:26.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.512 --rc genhtml_branch_coverage=1 00:19:26.512 --rc genhtml_function_coverage=1 00:19:26.512 --rc genhtml_legend=1 00:19:26.512 --rc geninfo_all_blocks=1 00:19:26.512 --rc geninfo_unexecuted_blocks=1 00:19:26.512 00:19:26.512 ' 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:26.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.512 --rc genhtml_branch_coverage=1 00:19:26.512 --rc genhtml_function_coverage=1 00:19:26.512 --rc genhtml_legend=1 00:19:26.512 --rc geninfo_all_blocks=1 00:19:26.512 --rc geninfo_unexecuted_blocks=1 00:19:26.512 00:19:26.512 ' 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:26.512 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:26.512 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:26.513 Cannot find device "nvmf_init_br" 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:26.513 Cannot find device "nvmf_init_br2" 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:26.513 Cannot find device "nvmf_tgt_br" 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:26.513 Cannot find device "nvmf_tgt_br2" 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:26.513 Cannot find device "nvmf_init_br" 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:26.513 Cannot find device "nvmf_init_br2" 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:26.513 Cannot find device "nvmf_tgt_br" 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:26.513 Cannot find device "nvmf_tgt_br2" 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:26.513 Cannot find device "nvmf_br" 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:26.513 Cannot find device "nvmf_init_if" 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:26.513 Cannot find device "nvmf_init_if2" 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:26.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:26.513 19:39:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:26.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:26.513 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:26.771 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:26.771 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:26.771 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:26.771 19:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
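Condensed, the fixture nvmf_veth_init is building here is: a target network namespace (nvmf_tgt_ns_spdk), veth pairs whose initiator ends carry 10.0.0.1/10.0.0.2 on the host and whose target ends carry 10.0.0.3/10.0.0.4 inside the namespace, all joined through an nvmf_br bridge; the final bridge enslavement, the iptables ACCEPT rules and the ping checks follow immediately below in the trace. A minimal sketch showing just the first initiator/target pair, with names taken from the trace:

# Minimal sketch of the veth/bridge topology nvmf_veth_init builds above;
# only the first initiator/target pair is shown for brevity.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3    # host initiator -> target namespace, as verified below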
00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:26.771 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:26.771 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.130 ms 00:19:26.771 00:19:26.771 --- 10.0.0.3 ping statistics --- 00:19:26.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.771 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:26.771 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:26.771 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:19:26.771 00:19:26.771 --- 10.0.0.4 ping statistics --- 00:19:26.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.771 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:26.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:26.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:26.771 00:19:26.771 --- 10.0.0.1 ping statistics --- 00:19:26.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.771 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:26.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:26.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:19:26.771 00:19:26.771 --- 10.0.0.2 ping statistics --- 00:19:26.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.771 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78300 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78300 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78300 ']' 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
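At this point the trace launches the SPDK target inside the test namespace (nvmfappstart -L nvme_auth, pid 78300) and waits for its RPC socket. Below is a simplified stand-in for nvmfappstart plus waitforlisten, reusing only the command line shown in the log; the real waitforlisten also exercises the RPC socket rather than merely polling for its existence.

# Simplified stand-in for nvmfappstart + waitforlisten as traced above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
for _ in $(seq 1 100); do                 # max_retries=100, as in the trace
    [ -S /var/tmp/spdk.sock ] && break    # RPC listen socket appears once the app is up
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done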
00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.771 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bf5ddd8ae09c6c0c61c5ca56f41cf98d 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ryz 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bf5ddd8ae09c6c0c61c5ca56f41cf98d 0 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bf5ddd8ae09c6c0c61c5ca56f41cf98d 0 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bf5ddd8ae09c6c0c61c5ca56f41cf98d 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ryz 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ryz 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ryz 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:27.335 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:27.335 19:39:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:27.336 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:27.336 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:27.336 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:27.336 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:27.336 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b172249de939aafce71aa772758ed71bdb71b172bd3f5dff9b2c35510a5655df 00:19:27.336 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:27.336 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Dwx 00:19:27.336 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b172249de939aafce71aa772758ed71bdb71b172bd3f5dff9b2c35510a5655df 3 00:19:27.336 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b172249de939aafce71aa772758ed71bdb71b172bd3f5dff9b2c35510a5655df 3 00:19:27.336 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:27.336 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:27.336 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b172249de939aafce71aa772758ed71bdb71b172bd3f5dff9b2c35510a5655df 00:19:27.336 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:27.336 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Dwx 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Dwx 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Dwx 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ad594de7c5ee2104ed3b6830893018f6957a201cdd3a175e 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4Ph 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ad594de7c5ee2104ed3b6830893018f6957a201cdd3a175e 0 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ad594de7c5ee2104ed3b6830893018f6957a201cdd3a175e 0 
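The gen_dhchap_key calls in the trace reduce to: draw random bytes, keep the hex string as the secret, and wrap it in a DHHC-1 container. A rough stand-alone equivalent is sketched below; the 4-byte CRC-32 suffix is inferred from the key strings visible in the log (SPDK performs this step with the inline `python -` shown in the trace), so treat the encoding details as an assumption rather than the exact helper.

key=$(xxd -p -c0 -l 16 /dev/urandom)            # 32 hex chars of random secret
file=$(mktemp -t spdk.key-null.XXX)
python3 -c 'import base64, sys, zlib
secret = sys.argv[1].encode()                    # the ASCII hex string is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")   # assumed: 4-byte little-endian CRC-32 tail
print("DHHC-1:%s:%s:" % (sys.argv[2], base64.b64encode(secret + crc).decode()))' \
    "$key" 00 > "$file"                          # 00 = no hash; 01/02/03 = sha256/384/512
chmod 0600 "$file"                               # keys are kept mode 0600, one temp file each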
00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ad594de7c5ee2104ed3b6830893018f6957a201cdd3a175e 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4Ph 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4Ph 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.4Ph 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=913e8da7b19f6eca99e337d3d6638f38c9804e1117ff50d3 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ugd 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 913e8da7b19f6eca99e337d3d6638f38c9804e1117ff50d3 2 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 913e8da7b19f6eca99e337d3d6638f38c9804e1117ff50d3 2 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=913e8da7b19f6eca99e337d3d6638f38c9804e1117ff50d3 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ugd 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ugd 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ugd 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:27.594 19:39:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fd31508fd3e6abcf5b944a73b76bce05 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.3ox 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fd31508fd3e6abcf5b944a73b76bce05 1 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fd31508fd3e6abcf5b944a73b76bce05 1 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fd31508fd3e6abcf5b944a73b76bce05 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.3ox 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.3ox 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.3ox 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:27.594 19:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b7970525f620e32b63c21d2e50ac8fd3 00:19:27.594 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:27.594 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.q2U 00:19:27.594 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b7970525f620e32b63c21d2e50ac8fd3 1 00:19:27.594 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b7970525f620e32b63c21d2e50ac8fd3 1 00:19:27.594 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:27.594 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:27.594 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=b7970525f620e32b63c21d2e50ac8fd3 00:19:27.594 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:27.594 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.q2U 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.q2U 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.q2U 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1a3c8e202b7aae2c46dea8c09d5d79d1a0b195c57ddcb4b7 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5Ep 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1a3c8e202b7aae2c46dea8c09d5d79d1a0b195c57ddcb4b7 2 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1a3c8e202b7aae2c46dea8c09d5d79d1a0b195c57ddcb4b7 2 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1a3c8e202b7aae2c46dea8c09d5d79d1a0b195c57ddcb4b7 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5Ep 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5Ep 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.5Ep 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:27.852 19:39:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=93f5ab16f25e9e3d4dbc1244963329ef 00:19:27.852 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.U0l 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 93f5ab16f25e9e3d4dbc1244963329ef 0 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 93f5ab16f25e9e3d4dbc1244963329ef 0 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=93f5ab16f25e9e3d4dbc1244963329ef 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.U0l 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.U0l 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.U0l 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ffbcb86fd37b1ef6606b9ca94b782fa7c4c50978fdf3e28d0539dff9acc64db9 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.FIn 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ffbcb86fd37b1ef6606b9ca94b782fa7c4c50978fdf3e28d0539dff9acc64db9 3 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ffbcb86fd37b1ef6606b9ca94b782fa7c4c50978fdf3e28d0539dff9acc64db9 3 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ffbcb86fd37b1ef6606b9ca94b782fa7c4c50978fdf3e28d0539dff9acc64db9 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.FIn 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.FIn 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.FIn 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78300 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78300 ']' 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.853 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ryz 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Dwx ]] 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Dwx 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.4Ph 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.ugd ]] 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.ugd 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.3ox 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.q2U ]] 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.q2U 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.5Ep 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.U0l ]] 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.U0l 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.FIn 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:28.419 19:39:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:28.419 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.420 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:28.420 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:28.420 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:28.420 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:28.420 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:28.420 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:28.420 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:28.420 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:28.420 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:28.420 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:19:28.420 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:28.420 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:28.420 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:28.420 19:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:28.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:28.679 Waiting for block devices as requested 00:19:28.679 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:28.939 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:29.506 No valid GPT data, bailing 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:29.506 No valid GPT data, bailing 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:29.506 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:29.765 No valid GPT data, bailing 00:19:29.765 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:29.765 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:29.765 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:29.765 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:29.765 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:29.765 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:29.765 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:29.765 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:29.765 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:29.765 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:29.765 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:29.765 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:29.765 19:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:29.765 No valid GPT data, bailing 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -a 10.0.0.1 -t tcp -s 4420 00:19:29.765 00:19:29.765 Discovery Log Number of Records 2, Generation counter 2 00:19:29.765 =====Discovery Log Entry 0====== 00:19:29.765 trtype: tcp 00:19:29.765 adrfam: ipv4 00:19:29.765 subtype: current discovery subsystem 00:19:29.765 treq: not specified, sq flow control disable supported 00:19:29.765 portid: 1 00:19:29.765 trsvcid: 4420 00:19:29.765 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:29.765 traddr: 10.0.0.1 00:19:29.765 eflags: none 00:19:29.765 sectype: none 00:19:29.765 =====Discovery Log Entry 1====== 00:19:29.765 trtype: tcp 00:19:29.765 adrfam: ipv4 00:19:29.765 subtype: nvme subsystem 00:19:29.765 treq: not specified, sq flow control disable supported 00:19:29.765 portid: 1 00:19:29.765 trsvcid: 4420 00:19:29.765 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:29.765 traddr: 10.0.0.1 00:19:29.765 eflags: none 00:19:29.765 sectype: none 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:29.765 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.024 nvme0n1 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: ]] 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.024 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.284 nvme0n1 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.284 
19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:30.284 19:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.284 nvme0n1 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.284 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:30.543 19:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.543 nvme0n1 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.543 19:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.543 19:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.800 nvme0n1 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:30.800 
19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.800 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
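The loop above repeats the same host-side sequence for each digest/dhgroup/keyid combination: restrict the allowed DH-HMAC-CHAP parameters, attach a controller with the host and controller secrets, confirm it appears in bdev_nvme_get_controllers, then detach it before the next iteration. A minimal sketch of one such iteration using SPDK's scripts/rpc.py client follows; it assumes the target listener at 10.0.0.1:4420 and the key1/ckey1 secrets have already been set up by earlier steps of the test, and it only replays the rpc_cmd invocations visible in the log rather than the test harness itself.
  # sketch: one iteration of the connect_authenticate flow seen in the log
  # restrict the host to a single digest and DH group
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # attach, authenticating with key1 (host secret) and ckey1 (controller secret)
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # verify the controller was created, then tear it down for the next iteration
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0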
00:19:31.057 nvme0n1 00:19:31.057 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.057 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.057 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.057 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.057 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:31.058 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: ]] 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:31.316 19:39:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.316 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.574 nvme0n1 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.574 19:39:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.574 19:39:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.574 19:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.833 nvme0n1 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.833 nvme0n1 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.833 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.834 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.834 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.834 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.834 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.834 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.834 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:31.834 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.834 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: ]] 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.094 nvme0n1 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:32.094 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.095 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.355 nvme0n1 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:32.355 19:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: ]] 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.924 19:39:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.924 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.182 nvme0n1 00:19:33.182 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.182 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.182 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.182 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.182 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.182 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.182 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.182 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.182 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.182 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.182 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.182 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.182 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:33.182 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.182 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:33.182 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.183 19:39:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.183 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.441 nvme0n1 00:19:33.441 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.441 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.441 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.441 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.441 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.441 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.441 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.441 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.441 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.441 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.441 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.441 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.441 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:33.441 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.441 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.442 19:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.701 nvme0n1 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: ]] 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.701 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.960 nvme0n1 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.960 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:34.273 19:39:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.273 nvme0n1 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
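The xtrace above repeats one fixed sequence for every (digest, dhgroup, keyid) combination: host/auth.sh programs the target-side key pair through its nvmet_auth_set_key helper, restricts the initiator to a single digest and DH group with bdev_nvme_set_options, attaches the controller with the matching --dhchap-key/--dhchap-ctrlr-key key names, confirms the controller came up via bdev_nvme_get_controllers, and detaches it before the next iteration. The lines below are a minimal bash paraphrase of one such iteration, not the verbatim test source; rpc_cmd is assumed to be the harness's JSON-RPC wrapper from autotest_common.sh and nvmet_auth_set_key the helper defined in host/auth.sh, both visible here only through their trace output.

# One connect_authenticate iteration, condensed from the xtrace above.
# rpc_cmd and nvmet_auth_set_key come from the test harness; the NQNs and the
# 10.0.0.1:4420 listener are the values echoed elsewhere in this trace.
digest=sha256 dhgroup=ffdhe4096 keyid=2

nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # load key/ckey into the kernel target

rpc_cmd bdev_nvme_set_options \
    --dhchap-digests "$digest" \
    --dhchap-dhgroups "$dhgroup"                     # allow exactly one digest and DH group

rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
rpc_cmd bdev_nvme_detach_controller nvme0              # clean up for the next keyid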
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:34.273 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:34.274 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:34.274 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:34.274 19:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: ]] 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.187 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.446 nvme0n1 00:19:36.446 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.446 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.446 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.446 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.446 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.446 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.706 19:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.966 nvme0n1 00:19:36.966 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.966 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.966 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.966 19:39:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.966 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.966 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.966 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.966 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.966 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.967 19:39:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.967 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.536 nvme0n1 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: ]] 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:37.536 19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.536 
19:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.795 nvme0n1 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:37.795 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.796 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.365 nvme0n1 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.365 19:39:31 
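One detail the keyid=4 iterations above make visible: their controller key entry is empty, which is why the [[ -z '' ]] test at host/auth.sh@51 succeeds and why the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment at host/auth.sh@58 expands to an empty array, leaving bdev_nvme_attach_controller with --dhchap-key key4 only. The short sketch below only illustrates that ${var:+...} shell mechanic and uses placeholder key strings, not the real secrets from this run.

# ${ckeys[keyid]:+word ...} expands to the bracketed words only when the array
# entry is set and non-empty, so an empty controller key drops both arguments.
ckeys=([2]="DHHC-1:01:placeholder:" [4]="")    # illustrative values only

for keyid in 2 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-<no ctrlr key args>}"
done
# keyid=2 -> --dhchap-ctrlr-key ckey2
# keyid=4 -> <no ctrlr key args>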
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: ]] 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.365 19:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.933 nvme0n1 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
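Every attach in this trace is preceded by the same get_main_ns_ip lookup from nvmf/common.sh: a small associative array maps the transport under test to the name of the environment variable holding the address used for the connection (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), and the function then dereferences that name and echoes the result, 10.0.0.1 in this run. The sketch below is reconstructed from the xtrace lines only; the exact guard behaviour and the assumption that the transport name lives in TEST_TRANSPORT are best-effort paraphrases, not the verbatim nvmf/common.sh source.

# Paraphrase of the get_main_ns_ip trace: transport -> variable name -> value.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                  # assumed variable name
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}                  # e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                           # indirect expansion of that name
    echo "${!ip}"                                         # 10.0.0.1 here
}

The echoed address is what feeds the -a argument of every bdev_nvme_attach_controller call in this log.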
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.933 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.869 nvme0n1 00:19:39.869 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.869 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.869 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.869 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.869 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.869 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.869 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.869 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.869 19:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.869 
19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.869 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.194 nvme0n1 00:19:40.194 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.194 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.194 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.194 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.194 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: ]] 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.478 19:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.046 nvme0n1 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.046 19:39:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:41.046 19:39:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.046 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.614 nvme0n1 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: ]] 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.614 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.615 19:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:41.889 nvme0n1 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.889 nvme0n1 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:41.889 
19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:41.889 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.148 nvme0n1 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: ]] 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.148 
19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.148 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.407 nvme0n1 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.407 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.408 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.408 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.408 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.408 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:42.408 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.408 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:42.408 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.408 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.408 nvme0n1 00:19:42.408 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.408 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.408 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.408 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.408 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.408 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: ]] 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.667 19:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.667 nvme0n1 00:19:42.667 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.667 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.667 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.667 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.667 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.667 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.667 
19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.668 19:39:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.668 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.927 nvme0n1 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:42.927 19:39:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:42.927 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.928 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.186 nvme0n1 00:19:43.186 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.186 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.186 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.186 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.186 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.186 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.186 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.186 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: ]] 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.187 19:39:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.187 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.446 nvme0n1 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:43.446 
19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
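Taken together, the trace above repeats one round per digest/dhgroup/keyid combination. A minimal bash sketch of that round follows; the rpc_cmd invocations and the jq check mirror the calls visible in the trace, while the loop variables are illustrative values from this run and the body of nvmet_auth_set_key (target-side key programming) is left to the helper as defined in host/auth.sh rather than reproduced here.

  # One authentication round, as exercised by host/auth.sh in the trace above.
  digest=sha384; dhgroup=ffdhe3072; keyid=0          # illustrative values from this run
  # Program the target side with the key (and controller key, if any) for this keyid.
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
  # Restrict the host to the digest/dhgroup under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Connect with the matching host key; --dhchap-ctrlr-key is passed only when a ckey
  # exists for this keyid (keyid 4 has none in this run).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
  # Verify the controller appeared, then detach it before the next combination.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0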
00:19:43.446 nvme0n1 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.446 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.705 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.705 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.705 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.705 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:43.705 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.705 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:43.705 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:43.705 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: ]] 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:43.706 19:39:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.706 19:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.706 nvme0n1 00:19:43.706 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.706 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.706 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.706 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.706 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.706 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.965 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.965 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.965 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.965 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.965 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.965 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.965 19:39:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.966 19:39:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.966 nvme0n1 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.966 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.226 nvme0n1 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.226 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: ]] 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.486 nvme0n1 00:19:44.486 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.745 19:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.745 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.745 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.745 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.745 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:44.746 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.746 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.746 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.746 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:44.746 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.746 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:44.746 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:44.746 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:44.746 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:44.746 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.746 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.004 nvme0n1 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:45.004 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: ]] 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.005 19:39:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.005 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.263 nvme0n1 00:19:45.263 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.263 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.263 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.263 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.263 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.263 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.263 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.263 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.263 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.263 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.523 19:39:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.523 19:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.782 nvme0n1 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.782 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.350 nvme0n1 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: ]] 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.350 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.609 nvme0n1 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:46.609 19:39:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.609 19:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.176 nvme0n1 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: ]] 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.176 19:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.744 nvme0n1 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.744 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.312 nvme0n1 00:19:48.312 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.312 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.312 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.312 19:39:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.312 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.312 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.312 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.312 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.312 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.312 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.571 19:39:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.571 19:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.139 nvme0n1 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: ]] 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:49.139 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.140 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.140 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.140 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.140 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.140 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.140 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.140 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.140 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.140 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.140 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.140 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.140 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.140 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.140 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:49.140 19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.140 
19:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.708 nvme0n1 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.708 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.276 nvme0n1 00:19:50.276 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.276 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.276 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.276 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.276 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.276 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:50.534 19:39:43 
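[Editor's note] For readers following the xtrace above: each iteration of this test loop drives the same short RPC sequence against the SPDK host, only varying the DH-CHAP digest, FFDHE group, and key index. The sketch below restates that sequence as plain rpc.py calls, reconstructed from the commands visible in this log (for example the sha384/ffdhe8192, key index 1 pass a few lines above). It is a minimal reconstruction, not the harness itself: the `rpc_cmd` wrapper, the `keyN`/`ckeyN` key names, and the target-side `nvmet_auth_set_key` setup are provided by the test scripts and are assumed to already be in place.

```bash
# Minimal sketch of one connect_authenticate iteration, reconstructed from the
# rpc_cmd calls in this log. Assumes scripts/rpc.py can reach the SPDK host app
# and that the DH-CHAP keys "key1"/"ckey1" were already registered by the harness.
digest=sha384
dhgroup=ffdhe8192
keyid=1

# Restrict the host to the digest/dhgroup pair under test.
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Connect to the target; --dhchap-ctrlr-key additionally enables bidirectional auth.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Authentication succeeded if the controller shows up (the "nvme0n1" markers in
# the log are the namespace appearing after a successful attach).
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

# Tear down before the next digest/dhgroup/keyid combination.
scripts/rpc.py bdev_nvme_detach_controller nvme0
```

Note that for key index 4 the log attaches with only `--dhchap-key key4` and no controller key, since no `ckey4` is defined; the `${ckeys[keyid]:+...}` expansion at auth.sh@58 simply drops the `--dhchap-ctrlr-key` argument in that case.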
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: ]] 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.534 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.535 19:39:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.535 nvme0n1 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:50.535 19:39:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.535 19:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.792 nvme0n1 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:50.792 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.793 nvme0n1 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.793 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: ]] 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.052 nvme0n1 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.052 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.053 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.053 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.053 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:51.053 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.053 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.311 nvme0n1 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: ]] 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:51.311 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.312 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:51.570 nvme0n1 00:19:51.570 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.570 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.570 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.570 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.570 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.570 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.570 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.570 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.570 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.571 nvme0n1 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.571 19:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:51.830 
19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.830 nvme0n1 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:51.830 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: ]] 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.089 
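The repeated nvmf/common.sh@769-783 entries are the get_main_ns_ip helper resolving which address to dial: it maps the transport type to the name of the environment variable holding the initiator/target IP and prints that variable's value (10.0.0.1 for tcp in this run). A rough reconstruction is sketched below; the variable names and checks mirror the trace, but the function body is inferred rather than copied from common.sh, and the real helper may do more (it is namespace-aware, as its name suggests).

# Hedged reconstruction of get_main_ns_ip as its expansion appears in the trace.
# TEST_TRANSPORT is an assumed name for whatever variable carries "tcp" here.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                     # "tcp" in this run
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # candidate variable name must exist
    ip=${ip_candidates[$TEST_TRANSPORT]}                     # e.g. NVMF_INITIATOR_IP
    ip=${!ip}                                                # indirect expansion -> 10.0.0.1 here
    [[ -z $ip ]] && return 1
    echo "$ip"
}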
19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:52.089 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.090 nvme0n1 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.090 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.360 nvme0n1 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: ]] 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:19:52.360 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.361 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.647 nvme0n1 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.647 
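A few entries back, the keyid=4 pass had an empty controller key (ckey=) and the attach was issued with --dhchap-key key4 only; the array assignment logged at host/auth.sh@58, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), is what makes the controller-key argument optional. A stand-alone illustration of that expansion, with placeholder secrets rather than the test's real DHHC-1 strings, follows.

# Minimal illustration of the optional controller-key idiom at host/auth.sh@58:
# when ckeys[keyid] is empty (as it was for keyid=4 above), the ckey array expands
# to zero words and no --dhchap-ctrlr-key argument reaches the attach RPC.
# The ckeys contents here are placeholders, not the secrets used by the test.
ckeys=( "secret0" "secret1" "secret2" "secret3" "" )

for keyid in "${!ckeys[@]}"; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> extra args: ${ckey[*]:-<none>}"
done
# keyid=0..3 print "--dhchap-ctrlr-key ckey<N>"; keyid=4 prints "<none>".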
19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.647 19:39:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.647 19:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.905 nvme0n1 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:52.905 19:39:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:52.905 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:52.906 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.906 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.906 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.164 nvme0n1 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: ]] 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.164 19:39:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.164 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.422 nvme0n1 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:53.422 
19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.422 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
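From the next entry onward the trace moves from ffdhe4096 to ffdhe6144, repeating the same five key indices. The overall shape being exercised is the nested loop at host/auth.sh@101-104: for every DH group, set the key on the target (nvmet_auth_set_key) and then authenticate from the host (connect_authenticate). A hedged reconstruction of that driver loop is sketched below; the array contents are assumptions (only ffdhe3072, ffdhe4096 and ffdhe6144 with key indices 0-4 are visible in this stretch), and sha512 is simply the digest in use at this point of the log.

# Hedged reconstruction of the driver loop behind these entries (host/auth.sh@101-104).
# Only the loop shape and the two helper calls are taken from the trace; the array
# contents and where the digest itself is iterated are assumptions.
digest=sha512                                     # digest in use in this stretch of the log
dhgroups=( ffdhe3072 ffdhe4096 ffdhe6144 )        # groups visible here; the full list is likely longer
keys=( key0 key1 key2 key3 key4 )                 # five key indices, as seen in the trace

for dhgroup in "${dhgroups[@]}"; do               # host/auth.sh@101
    for keyid in "${!keys[@]}"; do                # host/auth.sh@102
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # target-side key setup (@103)
        connect_authenticate "$digest" "$dhgroup" "$keyid"    # host-side attach/verify/detach (@104)
    done
done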
00:19:53.680 nvme0n1 00:19:53.680 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.680 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.680 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.680 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.680 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.680 19:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: ]] 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:53.680 19:39:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.680 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.267 nvme0n1 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.267 19:39:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.267 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.268 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.268 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.268 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.268 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:54.268 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.268 19:39:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:54.268 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:54.268 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:54.268 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.268 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.268 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.527 nvme0n1 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.527 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.528 19:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.094 nvme0n1 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: ]] 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.094 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.095 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.352 nvme0n1 00:19:55.353 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.353 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.353 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.353 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.353 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.353 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.611 19:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.869 nvme0n1 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY1ZGRkOGFlMDljNmMwYzYxYzVjYTU2ZjQxY2Y5OGS5qi5F: 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: ]] 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjE3MjI0OWRlOTM5YWFmY2U3MWFhNzcyNzU4ZWQ3MWJkYjcxYjE3MmJkM2Y1ZGZmOWIyYzM1NTEwYTU2NTVkZm9czos=: 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.869 19:39:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.869 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.870 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.870 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.870 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.870 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.870 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.870 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.870 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.870 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.805 nvme0n1 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.805 19:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:56.805 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.805 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.805 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.805 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.805 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.805 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.805 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.805 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.805 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.805 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.805 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.805 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.805 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.805 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.805 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.805 19:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.805 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.373 nvme0n1 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.373 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.374 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.374 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.374 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.374 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.374 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.374 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.374 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.374 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.374 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.374 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.374 19:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.940 nvme0n1 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWEzYzhlMjAyYjdhYWUyYzQ2ZGVhOGMwOWQ1ZDc5ZDFhMGIxOTVjNTdkZGNiNGI3ac7c6Q==: 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: ]] 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNmNWFiMTZmMjVlOWUzZDRkYmMxMjQ0OTYzMzI5ZWYWuVDw: 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:19:57.940 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.200 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.768 nvme0n1 00:19:58.768 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.768 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.768 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.768 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.768 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.768 19:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZiY2I4NmZkMzdiMWVmNjYwNmI5Y2E5NGI3ODJmYTdjNGM1MDk3OGZkZjNlMjhkMDUzOWRmZjlhY2M2NGRiOaHzdg8=: 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:58.768 19:39:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:58.768 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.769 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.769 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.769 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.769 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.769 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.769 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.769 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.769 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.769 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.769 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.769 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.769 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.769 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.769 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:58.769 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.769 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.337 nvme0n1 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.337 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.338 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.597 request: 00:19:59.597 { 00:19:59.597 "name": "nvme0", 00:19:59.597 "trtype": "tcp", 00:19:59.597 "traddr": "10.0.0.1", 00:19:59.597 "adrfam": "ipv4", 00:19:59.597 "trsvcid": "4420", 00:19:59.597 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:59.597 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:59.597 "prchk_reftag": false, 00:19:59.597 "prchk_guard": false, 00:19:59.597 "hdgst": false, 00:19:59.597 "ddgst": false, 00:19:59.597 "allow_unrecognized_csi": false, 00:19:59.597 "method": "bdev_nvme_attach_controller", 00:19:59.597 "req_id": 1 00:19:59.597 } 00:19:59.597 Got JSON-RPC error response 00:19:59.597 response: 00:19:59.597 { 00:19:59.597 "code": -5, 00:19:59.597 "message": "Input/output error" 00:19:59.597 } 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.597 request: 00:19:59.597 { 00:19:59.597 "name": "nvme0", 00:19:59.597 "trtype": "tcp", 00:19:59.597 "traddr": "10.0.0.1", 00:19:59.597 "adrfam": "ipv4", 00:19:59.597 "trsvcid": "4420", 00:19:59.597 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:59.597 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:59.597 "prchk_reftag": false, 00:19:59.597 "prchk_guard": false, 00:19:59.597 "hdgst": false, 00:19:59.597 "ddgst": false, 00:19:59.597 "dhchap_key": "key2", 00:19:59.597 "allow_unrecognized_csi": false, 00:19:59.597 "method": "bdev_nvme_attach_controller", 00:19:59.597 "req_id": 1 00:19:59.597 } 00:19:59.597 Got JSON-RPC error response 00:19:59.597 response: 00:19:59.597 { 00:19:59.597 "code": -5, 00:19:59.597 "message": "Input/output error" 00:19:59.597 } 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:59.597 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:59.598 19:39:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.598 request: 00:19:59.598 { 00:19:59.598 "name": "nvme0", 00:19:59.598 "trtype": "tcp", 00:19:59.598 "traddr": "10.0.0.1", 00:19:59.598 "adrfam": "ipv4", 00:19:59.598 "trsvcid": "4420", 
00:19:59.598 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:59.598 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:59.598 "prchk_reftag": false, 00:19:59.598 "prchk_guard": false, 00:19:59.598 "hdgst": false, 00:19:59.598 "ddgst": false, 00:19:59.598 "dhchap_key": "key1", 00:19:59.598 "dhchap_ctrlr_key": "ckey2", 00:19:59.598 "allow_unrecognized_csi": false, 00:19:59.598 "method": "bdev_nvme_attach_controller", 00:19:59.598 "req_id": 1 00:19:59.598 } 00:19:59.598 Got JSON-RPC error response 00:19:59.598 response: 00:19:59.598 { 00:19:59.598 "code": -5, 00:19:59.598 "message": "Input/output error" 00:19:59.598 } 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.598 19:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.857 nvme0n1 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.857 request: 00:19:59.857 { 00:19:59.857 "name": "nvme0", 00:19:59.857 "dhchap_key": "key1", 00:19:59.857 "dhchap_ctrlr_key": "ckey2", 00:19:59.857 "method": "bdev_nvme_set_keys", 00:19:59.857 "req_id": 1 00:19:59.857 } 00:19:59.857 Got JSON-RPC error response 00:19:59.857 response: 00:19:59.857 
{ 00:19:59.857 "code": -13, 00:19:59.857 "message": "Permission denied" 00:19:59.857 } 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.857 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.858 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.858 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:19:59.858 19:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQ1OTRkZTdjNWVlMjEwNGVkM2I2ODMwODkzMDE4ZjY5NTdhMjAxY2RkM2ExNzVlq9HaBQ==: 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: ]] 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTEzZThkYTdiMTlmNmVjYTk5ZTMzN2QzZDY2MzhmMzhjOTgwNGUxMTE3ZmY1MGQzTOdqdA==: 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.233 nvme0n1 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:01.233 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQzMTUwOGZkM2U2YWJjZjViOTQ0YTczYjc2YmNlMDXkokTh: 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: ]] 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc5NzA1MjVmNjIwZTMyYjYzYzIxZDJlNTBhYzhmZDOIToKK: 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.234 request: 00:20:01.234 { 00:20:01.234 "name": "nvme0", 00:20:01.234 "dhchap_key": "key2", 00:20:01.234 "dhchap_ctrlr_key": "ckey1", 00:20:01.234 "method": "bdev_nvme_set_keys", 00:20:01.234 "req_id": 1 00:20:01.234 } 00:20:01.234 Got JSON-RPC error response 00:20:01.234 response: 00:20:01.234 { 00:20:01.234 "code": -13, 00:20:01.234 "message": "Permission denied" 00:20:01.234 } 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:01.234 19:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:02.194 rmmod nvme_tcp 00:20:02.194 rmmod nvme_fabrics 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78300 ']' 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78300 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78300 ']' 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78300 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.194 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78300 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:02.461 killing process with pid 78300 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78300' 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78300 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78300 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:02.461 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:02.461 19:39:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:02.720 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:02.720 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:02.720 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:02.720 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:02.720 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:02.720 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:02.720 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:02.720 19:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:02.720 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:03.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:03.667 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
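As the nvmf_auth_host run above winds down, note that both its positive and negative paths reduce to two bdev_nvme RPCs. A condensed sketch of those calls, using the stock scripts/rpc.py client in place of the harness's rpc_cmd wrapper (an assumption; the address, NQNs, flags and key names are the ones captured in this run):

    # matching key / controller-key pair -> the attach succeeds and nvme0n1 appears
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # mismatched pairs are the negative paths asserted above with the NOT wrapper:
    #   attach with key1/ckey2                -> JSON-RPC error -5  (Input/output error)
    #   re-key with key1/ckey2 or key2/ckey1  -> JSON-RPC error -13 (Permission denied)
    scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2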
00:20:03.667 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:03.667 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ryz /tmp/spdk.key-null.4Ph /tmp/spdk.key-sha256.3ox /tmp/spdk.key-sha384.5Ep /tmp/spdk.key-sha512.FIn /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:03.667 19:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:03.926 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:03.926 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:03.926 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:04.184 00:20:04.184 real 0m37.854s 00:20:04.184 user 0m34.263s 00:20:04.184 sys 0m3.848s 00:20:04.184 ************************************ 00:20:04.184 END TEST nvmf_auth_host 00:20:04.184 ************************************ 00:20:04.184 19:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.184 19:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.184 19:39:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:20:04.184 19:39:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:04.184 19:39:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:04.184 19:39:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.184 19:39:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.184 ************************************ 00:20:04.184 START TEST nvmf_digest 00:20:04.184 ************************************ 00:20:04.184 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:04.184 * Looking for test storage... 
00:20:04.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:04.184 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:04.184 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:20:04.184 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:04.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.443 --rc genhtml_branch_coverage=1 00:20:04.443 --rc genhtml_function_coverage=1 00:20:04.443 --rc genhtml_legend=1 00:20:04.443 --rc geninfo_all_blocks=1 00:20:04.443 --rc geninfo_unexecuted_blocks=1 00:20:04.443 00:20:04.443 ' 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:04.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.443 --rc genhtml_branch_coverage=1 00:20:04.443 --rc genhtml_function_coverage=1 00:20:04.443 --rc genhtml_legend=1 00:20:04.443 --rc geninfo_all_blocks=1 00:20:04.443 --rc geninfo_unexecuted_blocks=1 00:20:04.443 00:20:04.443 ' 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:04.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.443 --rc genhtml_branch_coverage=1 00:20:04.443 --rc genhtml_function_coverage=1 00:20:04.443 --rc genhtml_legend=1 00:20:04.443 --rc geninfo_all_blocks=1 00:20:04.443 --rc geninfo_unexecuted_blocks=1 00:20:04.443 00:20:04.443 ' 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:04.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.443 --rc genhtml_branch_coverage=1 00:20:04.443 --rc genhtml_function_coverage=1 00:20:04.443 --rc genhtml_legend=1 00:20:04.443 --rc geninfo_all_blocks=1 00:20:04.443 --rc geninfo_unexecuted_blocks=1 00:20:04.443 00:20:04.443 ' 00:20:04.443 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.444 19:39:57 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:04.444 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:04.444 Cannot find device "nvmf_init_br" 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:04.444 Cannot find device "nvmf_init_br2" 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:04.444 Cannot find device "nvmf_tgt_br" 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:20:04.444 Cannot find device "nvmf_tgt_br2" 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:04.444 Cannot find device "nvmf_init_br" 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:04.444 Cannot find device "nvmf_init_br2" 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:04.444 Cannot find device "nvmf_tgt_br" 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:04.444 Cannot find device "nvmf_tgt_br2" 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:04.444 Cannot find device "nvmf_br" 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:04.444 Cannot find device "nvmf_init_if" 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:04.444 Cannot find device "nvmf_init_if2" 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:20:04.444 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:04.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.445 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:20:04.445 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:04.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.445 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:20:04.445 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:04.445 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:04.445 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:04.445 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:04.445 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:04.445 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:04.703 19:39:57 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:04.703 19:39:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:04.703 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:04.703 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:04.703 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:04.703 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:04.703 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:04.703 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:04.703 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:04.703 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:04.703 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:04.703 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:04.703 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:20:04.703 00:20:04.703 --- 10.0.0.3 ping statistics --- 00:20:04.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.703 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:04.704 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:04.704 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:20:04.704 00:20:04.704 --- 10.0.0.4 ping statistics --- 00:20:04.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.704 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:04.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:04.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:20:04.704 00:20:04.704 --- 10.0.0.1 ping statistics --- 00:20:04.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.704 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:04.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:20:04.704 00:20:04.704 --- 10.0.0.2 ping statistics --- 00:20:04.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.704 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:04.704 ************************************ 00:20:04.704 START TEST nvmf_digest_clean 00:20:04.704 ************************************ 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
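The four pings above confirm the virtual topology that nvmf_veth_init just assembled: the initiator-side interfaces (10.0.0.1 and 10.0.0.2) stay in the root namespace, the target-side interfaces (10.0.0.3 and 10.0.0.4) live inside nvmf_tgt_ns_spdk, and the bridge-side end of each veth pair is enslaved to nvmf_br. Condensed to one pair per side, the setup amounts to the following sketch (interface names and addresses as used in this run; the second pair of each side follows the same pattern):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end, 10.0.0.1
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end, 10.0.0.3
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # root namespace reaching the target namespace, as logged above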
00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79951 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79951 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79951 ']' 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.704 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:04.962 [2024-12-05 19:39:58.175998] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:20:04.962 [2024-12-05 19:39:58.176089] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.962 [2024-12-05 19:39:58.324344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.962 [2024-12-05 19:39:58.385177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.962 [2024-12-05 19:39:58.385259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.962 [2024-12-05 19:39:58.385273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.962 [2024-12-05 19:39:58.385283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.962 [2024-12-05 19:39:58.385292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
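Because nvmf_tgt is launched here with --wait-for-rpc inside nvmf_tgt_ns_spdk, it idles until it is configured over /var/tmp/spdk.sock; the null0 bdev and the listener on 10.0.0.3:4420 that appear in the log below come from that configuration step. A minimal sketch of an equivalent bring-up with the stock rpc.py client (the verbs and the null-bdev sizing are assumed from standard SPDK usage rather than visible in this excerpt; the subsystem NQN and address are the ones this test uses):

    scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_null_create null0 100 4096    # 100 MiB, 4 KiB blocks (assumed sizes)
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -f ipv4 -a 10.0.0.3 -s 4420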
00:20:04.962 [2024-12-05 19:39:58.385758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:05.221 [2024-12-05 19:39:58.541594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:05.221 null0 00:20:05.221 [2024-12-05 19:39:58.597569] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.221 [2024-12-05 19:39:58.621704] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:05.221 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:05.222 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:05.222 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:05.222 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79970 00:20:05.222 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79970 /var/tmp/bperf.sock 00:20:05.222 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79970 ']' 00:20:05.222 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:05.222 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.222 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock... 00:20:05.222 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:05.222 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.222 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:05.222 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:05.481 [2024-12-05 19:39:58.685555] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:20:05.481 [2024-12-05 19:39:58.685653] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79970 ] 00:20:05.481 [2024-12-05 19:39:58.838462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.481 [2024-12-05 19:39:58.898010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.740 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.740 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:05.740 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:05.740 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:05.740 19:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:05.998 [2024-12-05 19:39:59.216951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:05.998 19:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:05.998 19:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:06.257 nvme0n1 00:20:06.257 19:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:06.257 19:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:06.516 Running I/O for 2 seconds... 
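The initiator side of each clean pass drives bdevperf through its own RPC socket, as traced above: framework_start_init completes the deferred startup (which is when the uring socket implementation override takes effect), bdev_nvme_attach_controller --ddgst attaches the target namespace with the NVMe/TCP data digest enabled so every data PDU carries a CRC32C, and bdevperf.py perform_tests starts the configured workload (randread, 4 KiB blocks, queue depth 128, 2 seconds on this first pass). A sketch of the same sequence, with the commands exactly as they appear in the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF=/var/tmp/bperf.sock

    # bdevperf itself was started as:
    #   bdevperf -m 2 -r $BPERF -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
    $RPC -s $BPERF framework_start_init
    $RPC -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests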
00:20:08.440 15113.00 IOPS, 59.04 MiB/s [2024-12-05T19:40:01.877Z] 15176.50 IOPS, 59.28 MiB/s 00:20:08.440 Latency(us) 00:20:08.440 [2024-12-05T19:40:01.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.440 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:08.440 nvme0n1 : 2.01 15194.33 59.35 0.00 0.00 8418.75 7506.85 21567.30 00:20:08.440 [2024-12-05T19:40:01.877Z] =================================================================================================================== 00:20:08.440 [2024-12-05T19:40:01.877Z] Total : 15194.33 59.35 0.00 0.00 8418.75 7506.85 21567.30 00:20:08.440 { 00:20:08.440 "results": [ 00:20:08.440 { 00:20:08.440 "job": "nvme0n1", 00:20:08.440 "core_mask": "0x2", 00:20:08.440 "workload": "randread", 00:20:08.440 "status": "finished", 00:20:08.440 "queue_depth": 128, 00:20:08.440 "io_size": 4096, 00:20:08.440 "runtime": 2.006077, 00:20:08.440 "iops": 15194.332022150695, 00:20:08.440 "mibps": 59.35285946152615, 00:20:08.440 "io_failed": 0, 00:20:08.440 "io_timeout": 0, 00:20:08.440 "avg_latency_us": 8418.75273049381, 00:20:08.440 "min_latency_us": 7506.850909090909, 00:20:08.440 "max_latency_us": 21567.30181818182 00:20:08.440 } 00:20:08.440 ], 00:20:08.440 "core_count": 1 00:20:08.440 } 00:20:08.441 19:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:08.441 19:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:08.441 19:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:08.441 19:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:08.441 19:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:08.441 | select(.opcode=="crc32c") 00:20:08.441 | "\(.module_name) \(.executed)"' 00:20:08.700 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:08.700 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:08.700 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:08.700 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:08.700 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79970 00:20:08.700 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79970 ']' 00:20:08.700 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79970 00:20:08.700 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:08.700 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.700 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79970 00:20:08.700 killing process with pid 79970 00:20:08.700 Received shutdown signal, test time was about 2.000000 seconds 00:20:08.700 00:20:08.700 Latency(us) 00:20:08.700 [2024-12-05T19:40:02.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:08.700 [2024-12-05T19:40:02.137Z] =================================================================================================================== 00:20:08.700 [2024-12-05T19:40:02.137Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:08.700 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:08.700 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:08.700 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79970' 00:20:08.700 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79970 00:20:08.700 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79970 00:20:08.958 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:08.958 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:08.958 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:08.958 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:08.958 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:08.958 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:08.958 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:08.958 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80023 00:20:08.958 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80023 /var/tmp/bperf.sock 00:20:08.958 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:08.959 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80023 ']' 00:20:08.959 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:08.959 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.959 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:08.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:08.959 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.959 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:08.959 [2024-12-05 19:40:02.307400] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:20:08.959 [2024-12-05 19:40:02.307685] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80023 ] 00:20:08.959 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:08.959 Zero copy mechanism will not be used. 00:20:09.217 [2024-12-05 19:40:02.454331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.217 [2024-12-05 19:40:02.507669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.217 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.217 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:09.217 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:09.217 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:09.217 19:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:09.785 [2024-12-05 19:40:02.954497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:09.785 19:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:09.785 19:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:10.045 nvme0n1 00:20:10.045 19:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:10.045 19:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:10.304 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:10.304 Zero copy mechanism will not be used. 00:20:10.304 Running I/O for 2 seconds... 
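This second clean pass moves to 128 KiB I/O at queue depth 16; as the sock-layer messages above note, 131072 bytes is over the 65536-byte zero-copy threshold, so the zero-copy send path is not used for this run. For the bandwidth figures that follow, the conversion behind the reported mibps value is simply

    MiB/s = IOPS * io_size / 2^20

so at 128 KiB per I/O the bandwidth is IOPS / 8; for example, the 7381.83 IOPS reported below works out to 7381.83 * 131072 / 1048576 ≈ 922.73 MiB/s, matching the printed value.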
00:20:12.177 7216.00 IOPS, 902.00 MiB/s [2024-12-05T19:40:05.614Z] 7384.00 IOPS, 923.00 MiB/s 00:20:12.177 Latency(us) 00:20:12.177 [2024-12-05T19:40:05.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.177 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:12.177 nvme0n1 : 2.00 7381.83 922.73 0.00 0.00 2164.08 1712.87 9949.56 00:20:12.177 [2024-12-05T19:40:05.614Z] =================================================================================================================== 00:20:12.177 [2024-12-05T19:40:05.614Z] Total : 7381.83 922.73 0.00 0.00 2164.08 1712.87 9949.56 00:20:12.177 { 00:20:12.177 "results": [ 00:20:12.177 { 00:20:12.177 "job": "nvme0n1", 00:20:12.177 "core_mask": "0x2", 00:20:12.177 "workload": "randread", 00:20:12.177 "status": "finished", 00:20:12.177 "queue_depth": 16, 00:20:12.177 "io_size": 131072, 00:20:12.177 "runtime": 2.002755, 00:20:12.177 "iops": 7381.831527071459, 00:20:12.177 "mibps": 922.7289408839324, 00:20:12.177 "io_failed": 0, 00:20:12.177 "io_timeout": 0, 00:20:12.177 "avg_latency_us": 2164.081290830382, 00:20:12.177 "min_latency_us": 1712.8727272727272, 00:20:12.177 "max_latency_us": 9949.556363636364 00:20:12.177 } 00:20:12.177 ], 00:20:12.177 "core_count": 1 00:20:12.177 } 00:20:12.177 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:12.177 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:12.177 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:12.177 | select(.opcode=="crc32c") 00:20:12.177 | "\(.module_name) \(.executed)"' 00:20:12.177 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:12.177 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:12.435 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:12.435 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:12.435 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:12.435 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:12.435 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80023 00:20:12.435 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80023 ']' 00:20:12.435 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80023 00:20:12.435 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:12.436 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.436 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80023 00:20:12.436 killing process with pid 80023 00:20:12.436 Received shutdown signal, test time was about 2.000000 seconds 00:20:12.436 00:20:12.436 Latency(us) 00:20:12.436 [2024-12-05T19:40:05.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:12.436 [2024-12-05T19:40:05.873Z] =================================================================================================================== 00:20:12.436 [2024-12-05T19:40:05.873Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.436 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:12.436 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:12.436 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80023' 00:20:12.436 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80023 00:20:12.436 19:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80023 00:20:12.695 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:12.695 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:12.695 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:12.695 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:12.695 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:12.695 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:12.695 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:12.695 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80081 00:20:12.695 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80081 /var/tmp/bperf.sock 00:20:12.695 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:12.695 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80081 ']' 00:20:12.695 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:12.695 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.695 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:12.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:12.695 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.695 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:12.695 [2024-12-05 19:40:06.092977] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:20:12.695 [2024-12-05 19:40:06.093461] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80081 ] 00:20:12.953 [2024-12-05 19:40:06.242507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.953 [2024-12-05 19:40:06.297701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.953 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.953 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:12.953 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:12.953 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:12.953 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:13.520 [2024-12-05 19:40:06.691364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:13.520 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:13.520 19:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:13.778 nvme0n1 00:20:13.778 19:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:13.778 19:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:13.778 Running I/O for 2 seconds... 
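After every pass the harness reads the accel-framework statistics back from bdevperf and checks that the CRC32C digest work was really executed, and by the expected module: with scan_dsa=false the expected module is the software implementation, and the executed count must be non-zero (the digest.sh@93-96 lines in the surrounding traces). A sketch of that check, using the accel_get_stats call and jq filter visible in the log; the final test line paraphrases digest.sh's assertions:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
      | { read -r acc_module acc_executed
          [[ $acc_module == software && $acc_executed -gt 0 ]] && echo "crc32c executed in software: OK"; }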
00:20:16.089 17273.00 IOPS, 67.47 MiB/s [2024-12-05T19:40:09.526Z] 17336.00 IOPS, 67.72 MiB/s 00:20:16.089 Latency(us) 00:20:16.089 [2024-12-05T19:40:09.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.089 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:16.089 nvme0n1 : 2.01 17382.65 67.90 0.00 0.00 7357.94 2993.80 15192.44 00:20:16.089 [2024-12-05T19:40:09.526Z] =================================================================================================================== 00:20:16.089 [2024-12-05T19:40:09.526Z] Total : 17382.65 67.90 0.00 0.00 7357.94 2993.80 15192.44 00:20:16.089 { 00:20:16.089 "results": [ 00:20:16.089 { 00:20:16.089 "job": "nvme0n1", 00:20:16.089 "core_mask": "0x2", 00:20:16.089 "workload": "randwrite", 00:20:16.089 "status": "finished", 00:20:16.089 "queue_depth": 128, 00:20:16.089 "io_size": 4096, 00:20:16.089 "runtime": 2.009302, 00:20:16.089 "iops": 17382.653279596598, 00:20:16.089 "mibps": 67.90098937342421, 00:20:16.089 "io_failed": 0, 00:20:16.089 "io_timeout": 0, 00:20:16.089 "avg_latency_us": 7357.9419158400515, 00:20:16.089 "min_latency_us": 2993.8036363636365, 00:20:16.089 "max_latency_us": 15192.436363636363 00:20:16.089 } 00:20:16.089 ], 00:20:16.089 "core_count": 1 00:20:16.089 } 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:16.089 | select(.opcode=="crc32c") 00:20:16.089 | "\(.module_name) \(.executed)"' 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80081 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80081 ']' 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80081 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80081 00:20:16.089 killing process with pid 80081 00:20:16.089 Received shutdown signal, test time was about 2.000000 seconds 00:20:16.089 00:20:16.089 Latency(us) 00:20:16.089 [2024-12-05T19:40:09.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:16.089 [2024-12-05T19:40:09.526Z] =================================================================================================================== 00:20:16.089 [2024-12-05T19:40:09.526Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80081' 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80081 00:20:16.089 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80081 00:20:16.348 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:16.348 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:16.348 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:16.348 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:16.348 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:16.348 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:16.348 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:16.348 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80129 00:20:16.348 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80129 /var/tmp/bperf.sock 00:20:16.348 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:16.348 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80129 ']' 00:20:16.348 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:16.348 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.348 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:16.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:16.348 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.348 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:16.348 [2024-12-05 19:40:09.757816] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:20:16.348 [2024-12-05 19:40:09.758446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80129 ] 00:20:16.348 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:16.348 Zero copy mechanism will not be used. 00:20:16.606 [2024-12-05 19:40:09.907045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.606 [2024-12-05 19:40:09.957432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.606 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.606 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:16.606 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:16.606 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:16.606 19:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:17.171 [2024-12-05 19:40:10.314022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:17.171 19:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:17.171 19:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:17.428 nvme0n1 00:20:17.428 19:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:17.428 19:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:17.428 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:17.428 Zero copy mechanism will not be used. 00:20:17.428 Running I/O for 2 seconds... 
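Taken together, the four clean passes cover both I/O directions at both block-size/queue-depth points, always with data digests enabled and no DSA offload. The run_bperf calls traced at digest.sh@128 through @131 amount to the sweep below; the loop is only a compact restatement of those four invocations:

    # randread/randwrite at 4 KiB QD 128 and 128 KiB QD 16, scan_dsa=false throughout
    for spec in "randread 4096 128" "randread 131072 16" \
                "randwrite 4096 128" "randwrite 131072 16"; do
        run_bperf $spec false
    done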
00:20:19.737 6633.00 IOPS, 829.12 MiB/s [2024-12-05T19:40:13.174Z] 6642.50 IOPS, 830.31 MiB/s 00:20:19.737 Latency(us) 00:20:19.737 [2024-12-05T19:40:13.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.737 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:19.737 nvme0n1 : 2.00 6640.99 830.12 0.00 0.00 2403.93 1630.95 6702.55 00:20:19.737 [2024-12-05T19:40:13.174Z] =================================================================================================================== 00:20:19.737 [2024-12-05T19:40:13.174Z] Total : 6640.99 830.12 0.00 0.00 2403.93 1630.95 6702.55 00:20:19.737 { 00:20:19.737 "results": [ 00:20:19.737 { 00:20:19.737 "job": "nvme0n1", 00:20:19.737 "core_mask": "0x2", 00:20:19.737 "workload": "randwrite", 00:20:19.737 "status": "finished", 00:20:19.737 "queue_depth": 16, 00:20:19.737 "io_size": 131072, 00:20:19.737 "runtime": 2.00407, 00:20:19.737 "iops": 6640.985594315568, 00:20:19.737 "mibps": 830.123199289446, 00:20:19.737 "io_failed": 0, 00:20:19.737 "io_timeout": 0, 00:20:19.738 "avg_latency_us": 2403.932800633884, 00:20:19.738 "min_latency_us": 1630.9527272727273, 00:20:19.738 "max_latency_us": 6702.545454545455 00:20:19.738 } 00:20:19.738 ], 00:20:19.738 "core_count": 1 00:20:19.738 } 00:20:19.738 19:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:19.738 19:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:19.738 19:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:19.738 19:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:19.738 | select(.opcode=="crc32c") 00:20:19.738 | "\(.module_name) \(.executed)"' 00:20:19.738 19:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:19.738 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:19.738 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:19.738 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:19.738 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:19.738 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80129 00:20:19.738 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80129 ']' 00:20:19.738 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80129 00:20:19.738 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:19.738 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.738 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80129 00:20:19.997 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:19.997 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:20:19.997 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80129' 00:20:19.997 killing process with pid 80129 00:20:19.997 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80129 00:20:19.997 Received shutdown signal, test time was about 2.000000 seconds 00:20:19.997 00:20:19.997 Latency(us) 00:20:19.997 [2024-12-05T19:40:13.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.997 [2024-12-05T19:40:13.434Z] =================================================================================================================== 00:20:19.997 [2024-12-05T19:40:13.434Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.997 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80129 00:20:19.997 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79951 00:20:19.997 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79951 ']' 00:20:19.997 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79951 00:20:19.997 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:19.997 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.997 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79951 00:20:19.997 killing process with pid 79951 00:20:19.997 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.997 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.997 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79951' 00:20:19.997 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79951 00:20:19.997 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79951 00:20:20.256 00:20:20.256 real 0m15.518s 00:20:20.256 user 0m30.320s 00:20:20.256 sys 0m4.464s 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:20.256 ************************************ 00:20:20.256 END TEST nvmf_digest_clean 00:20:20.256 ************************************ 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:20.256 ************************************ 00:20:20.256 START TEST nvmf_digest_error 00:20:20.256 ************************************ 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:20:20.256 19:40:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80205 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80205 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:20.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80205 ']' 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.256 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:20.516 [2024-12-05 19:40:13.744045] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:20:20.516 [2024-12-05 19:40:13.744462] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.516 [2024-12-05 19:40:13.883937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.516 [2024-12-05 19:40:13.933049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.516 [2024-12-05 19:40:13.933431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.516 [2024-12-05 19:40:13.933452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.516 [2024-12-05 19:40:13.933460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.516 [2024-12-05 19:40:13.933467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:20.516 [2024-12-05 19:40:13.933861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.775 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.775 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:20.775 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:20.775 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:20.775 19:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:20.775 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.775 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:20.775 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.775 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:20.775 [2024-12-05 19:40:14.030323] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:20.775 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.775 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:20.775 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:20.775 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.775 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:20.775 [2024-12-05 19:40:14.092657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:20.775 null0 00:20:20.775 [2024-12-05 19:40:14.148718] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.775 [2024-12-05 19:40:14.172840] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:20.775 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.775 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:20.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
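The error-injection variant starts the same way but, before the new target finishes initialization, routes the crc32c opcode to SPDK's error accel module (the accel_assign_opc call and the accel_rpc.c notice above); the TCP transport, null0 bdev and 10.0.0.3:4420 listener are then configured as in the clean test. Assuming rpc_cmd here talks to the target's default /var/tmp/spdk.sock (the socket waitforlisten polled above), the target-side difference reduces to one call:

    # Route crc32c on the target through the error-injection accel module.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error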
00:20:20.775 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:20.775 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:20.775 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:20.775 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:20.775 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80230 00:20:20.776 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80230 /var/tmp/bperf.sock 00:20:20.776 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:20.776 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80230 ']' 00:20:20.776 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:20.776 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.776 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:20.776 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.776 19:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:21.034 [2024-12-05 19:40:14.225855] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:20:21.034 [2024-12-05 19:40:14.226235] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80230 ] 00:20:21.034 [2024-12-05 19:40:14.368915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.034 [2024-12-05 19:40:14.426255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.293 [2024-12-05 19:40:14.483022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:21.857 19:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.857 19:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:21.857 19:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:21.857 19:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:22.114 19:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:22.114 19:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.114 19:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:22.114 19:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.114 19:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:22.114 19:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:22.679 nvme0n1 00:20:22.679 19:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:22.679 19:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.679 19:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:22.679 19:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.679 19:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:22.679 19:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:22.679 Running I/O for 2 seconds... 
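The bperf setup for the error test differs from the clean runs in three ways, all visible in the trace above: bdevperf is launched without --wait-for-rpc, bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 makes the bdev layer keep NVMe error statistics and retry failed I/O rather than failing it up the stack, and accel_error_inject_error arms digest corruption on the target, first set to disable while the controller attaches cleanly and then to corrupt for crc32c (-t corrupt -i 256) before perform_tests starts. The corrupted digests are what produce the nvme_tcp "data digest error" lines and the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that follow. A sketch of the sequence, again assuming the plain rpc_cmd calls go to the target's /var/tmp/spdk.sock:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Initiator (bdevperf): keep NVMe error stats, do not cap I/O retries (-1).
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Target: make sure injection is off while the controller attaches.
    $RPC -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable

    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Target: arm crc32c corruption as traced (-t corrupt -i 256), then run the workload.
    $RPC -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests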
00:20:22.679 [2024-12-05 19:40:15.982706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.679 [2024-12-05 19:40:15.982754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.679 [2024-12-05 19:40:15.982780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.679 [2024-12-05 19:40:15.999413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.679 [2024-12-05 19:40:15.999452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.679 [2024-12-05 19:40:15.999465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.679 [2024-12-05 19:40:16.015561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.679 [2024-12-05 19:40:16.015598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.679 [2024-12-05 19:40:16.015627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.679 [2024-12-05 19:40:16.031773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.679 [2024-12-05 19:40:16.032008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.679 [2024-12-05 19:40:16.032026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.679 [2024-12-05 19:40:16.048876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.679 [2024-12-05 19:40:16.049082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.679 [2024-12-05 19:40:16.049320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.679 [2024-12-05 19:40:16.066130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.679 [2024-12-05 19:40:16.066321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.679 [2024-12-05 19:40:16.066521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.679 [2024-12-05 19:40:16.082925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.679 [2024-12-05 19:40:16.083108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.679 [2024-12-05 19:40:16.083352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.679 [2024-12-05 19:40:16.099444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.679 [2024-12-05 19:40:16.099654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.679 [2024-12-05 19:40:16.099842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.936 [2024-12-05 19:40:16.115658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.936 [2024-12-05 19:40:16.115856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.936 [2024-12-05 19:40:16.116005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.936 [2024-12-05 19:40:16.132055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.936 [2024-12-05 19:40:16.132288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.936 [2024-12-05 19:40:16.132409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.936 [2024-12-05 19:40:16.150093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.936 [2024-12-05 19:40:16.150222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.936 [2024-12-05 19:40:16.150363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.936 [2024-12-05 19:40:16.167658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.936 [2024-12-05 19:40:16.167857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.936 [2024-12-05 19:40:16.167885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.936 [2024-12-05 19:40:16.184975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.936 [2024-12-05 19:40:16.185162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.936 [2024-12-05 19:40:16.185196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.936 [2024-12-05 19:40:16.200851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.936 [2024-12-05 19:40:16.201027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.936 [2024-12-05 19:40:16.201060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.937 [2024-12-05 19:40:16.217016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.937 [2024-12-05 19:40:16.217053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.937 [2024-12-05 19:40:16.217067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.937 [2024-12-05 19:40:16.233589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.937 [2024-12-05 19:40:16.233770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.937 [2024-12-05 19:40:16.233804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.937 [2024-12-05 19:40:16.249895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.937 [2024-12-05 19:40:16.249932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.937 [2024-12-05 19:40:16.249961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.937 [2024-12-05 19:40:16.266048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.937 [2024-12-05 19:40:16.266086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.937 [2024-12-05 19:40:16.266098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.937 [2024-12-05 19:40:16.282008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.937 [2024-12-05 19:40:16.282044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.937 [2024-12-05 19:40:16.282057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.937 [2024-12-05 19:40:16.298065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.937 [2024-12-05 19:40:16.298101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.937 [2024-12-05 19:40:16.298129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.937 [2024-12-05 19:40:16.314107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.937 [2024-12-05 19:40:16.314153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.937 [2024-12-05 19:40:16.314165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.937 [2024-12-05 19:40:16.330150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.937 [2024-12-05 19:40:16.330201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.937 [2024-12-05 19:40:16.330230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.937 [2024-12-05 19:40:16.346180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.937 [2024-12-05 19:40:16.346215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.937 [2024-12-05 19:40:16.346243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.937 [2024-12-05 19:40:16.362177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:22.937 [2024-12-05 19:40:16.362212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.937 [2024-12-05 19:40:16.362240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.195 [2024-12-05 19:40:16.378338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.195 [2024-12-05 19:40:16.378372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.195 [2024-12-05 19:40:16.378384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.195 [2024-12-05 19:40:16.395224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.195 [2024-12-05 19:40:16.395256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.195 [2024-12-05 19:40:16.395268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.195 [2024-12-05 19:40:16.411314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.195 [2024-12-05 19:40:16.411348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.195 [2024-12-05 19:40:16.411376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.195 [2024-12-05 19:40:16.427509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.195 [2024-12-05 19:40:16.427547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.195 
[2024-12-05 19:40:16.427559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.195 [2024-12-05 19:40:16.444317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.195 [2024-12-05 19:40:16.444501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.195 [2024-12-05 19:40:16.444520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.195 [2024-12-05 19:40:16.461169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.195 [2024-12-05 19:40:16.461204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.195 [2024-12-05 19:40:16.461233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.195 [2024-12-05 19:40:16.477283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.195 [2024-12-05 19:40:16.477318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.195 [2024-12-05 19:40:16.477346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.195 [2024-12-05 19:40:16.493460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.195 [2024-12-05 19:40:16.493637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.195 [2024-12-05 19:40:16.493670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.195 [2024-12-05 19:40:16.509919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.196 [2024-12-05 19:40:16.509956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.196 [2024-12-05 19:40:16.509985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.196 [2024-12-05 19:40:16.526487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.196 [2024-12-05 19:40:16.526525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.196 [2024-12-05 19:40:16.526537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.196 [2024-12-05 19:40:16.542863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.196 [2024-12-05 19:40:16.542899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8767 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:23.196 [2024-12-05 19:40:16.542927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.196 [2024-12-05 19:40:16.559024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.196 [2024-12-05 19:40:16.559060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.196 [2024-12-05 19:40:16.559097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.196 [2024-12-05 19:40:16.575211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.196 [2024-12-05 19:40:16.575245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.196 [2024-12-05 19:40:16.575273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.196 [2024-12-05 19:40:16.591586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.196 [2024-12-05 19:40:16.591640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.196 [2024-12-05 19:40:16.591653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.196 [2024-12-05 19:40:16.608976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.196 [2024-12-05 19:40:16.609141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.196 [2024-12-05 19:40:16.609157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.196 [2024-12-05 19:40:16.626425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.196 [2024-12-05 19:40:16.626463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.196 [2024-12-05 19:40:16.626492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.454 [2024-12-05 19:40:16.643073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.454 [2024-12-05 19:40:16.643139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.454 [2024-12-05 19:40:16.643170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.454 [2024-12-05 19:40:16.659379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.454 [2024-12-05 19:40:16.659414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 
nsid:1 lba:917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.454 [2024-12-05 19:40:16.659443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.454 [2024-12-05 19:40:16.675886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.454 [2024-12-05 19:40:16.675938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.454 [2024-12-05 19:40:16.675952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.454 [2024-12-05 19:40:16.692399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.454 [2024-12-05 19:40:16.692580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.454 [2024-12-05 19:40:16.692613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.454 [2024-12-05 19:40:16.709119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.454 [2024-12-05 19:40:16.709181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.454 [2024-12-05 19:40:16.709211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.455 [2024-12-05 19:40:16.726002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.455 [2024-12-05 19:40:16.726053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.455 [2024-12-05 19:40:16.726066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.455 [2024-12-05 19:40:16.742494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.455 [2024-12-05 19:40:16.742529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.455 [2024-12-05 19:40:16.742558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.455 [2024-12-05 19:40:16.758908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.455 [2024-12-05 19:40:16.758941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.455 [2024-12-05 19:40:16.758970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.455 [2024-12-05 19:40:16.774796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.455 [2024-12-05 19:40:16.774830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.455 [2024-12-05 19:40:16.774858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.455 [2024-12-05 19:40:16.791393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.455 [2024-12-05 19:40:16.791430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.455 [2024-12-05 19:40:16.791443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.455 [2024-12-05 19:40:16.808910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.455 [2024-12-05 19:40:16.809095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.455 [2024-12-05 19:40:16.809127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.455 [2024-12-05 19:40:16.825705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.455 [2024-12-05 19:40:16.825742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.455 [2024-12-05 19:40:16.825771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.455 [2024-12-05 19:40:16.843270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.455 [2024-12-05 19:40:16.843307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.455 [2024-12-05 19:40:16.843320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.455 [2024-12-05 19:40:16.860777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.455 [2024-12-05 19:40:16.860812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.455 [2024-12-05 19:40:16.860841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.455 [2024-12-05 19:40:16.877808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.455 [2024-12-05 19:40:16.877843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.455 [2024-12-05 19:40:16.877872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.715 [2024-12-05 19:40:16.894453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 
00:20:23.715 [2024-12-05 19:40:16.894488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.715 [2024-12-05 19:40:16.894516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.715 [2024-12-05 19:40:16.910731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.715 [2024-12-05 19:40:16.910799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.715 [2024-12-05 19:40:16.910812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.715 [2024-12-05 19:40:16.926992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.715 [2024-12-05 19:40:16.927046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.715 [2024-12-05 19:40:16.927075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.715 [2024-12-05 19:40:16.942980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.715 [2024-12-05 19:40:16.943035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.715 [2024-12-05 19:40:16.943065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.715 15181.00 IOPS, 59.30 MiB/s [2024-12-05T19:40:17.152Z] [2024-12-05 19:40:16.960734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.715 [2024-12-05 19:40:16.960965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.715 [2024-12-05 19:40:16.960984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.715 [2024-12-05 19:40:16.977439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.715 [2024-12-05 19:40:16.977479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.715 [2024-12-05 19:40:16.977508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.715 [2024-12-05 19:40:16.993800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.715 [2024-12-05 19:40:16.993835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.715 [2024-12-05 19:40:16.993848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.715 [2024-12-05 19:40:17.010662] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.715 [2024-12-05 19:40:17.010697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.715 [2024-12-05 19:40:17.010725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.715 [2024-12-05 19:40:17.034313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.715 [2024-12-05 19:40:17.034349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.715 [2024-12-05 19:40:17.034378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.715 [2024-12-05 19:40:17.050765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.715 [2024-12-05 19:40:17.050803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.715 [2024-12-05 19:40:17.050832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.715 [2024-12-05 19:40:17.067805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.715 [2024-12-05 19:40:17.067842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.715 [2024-12-05 19:40:17.067854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.715 [2024-12-05 19:40:17.084673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.715 [2024-12-05 19:40:17.084864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.715 [2024-12-05 19:40:17.084898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.715 [2024-12-05 19:40:17.100663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.715 [2024-12-05 19:40:17.100699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.715 [2024-12-05 19:40:17.100728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.715 [2024-12-05 19:40:17.116434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.715 [2024-12-05 19:40:17.116468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.716 [2024-12-05 19:40:17.116497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:20:23.716 [2024-12-05 19:40:17.132132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.716 [2024-12-05 19:40:17.132167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.716 [2024-12-05 19:40:17.132180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.716 [2024-12-05 19:40:17.147846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.716 [2024-12-05 19:40:17.147893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.716 [2024-12-05 19:40:17.147908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.975 [2024-12-05 19:40:17.164168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.975 [2024-12-05 19:40:17.164206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.975 [2024-12-05 19:40:17.164219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.975 [2024-12-05 19:40:17.181892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.975 [2024-12-05 19:40:17.182074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.975 [2024-12-05 19:40:17.182107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.975 [2024-12-05 19:40:17.199316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.975 [2024-12-05 19:40:17.199351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.975 [2024-12-05 19:40:17.199380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.975 [2024-12-05 19:40:17.214903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.975 [2024-12-05 19:40:17.214938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.975 [2024-12-05 19:40:17.214967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.976 [2024-12-05 19:40:17.230700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.976 [2024-12-05 19:40:17.230735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.976 [2024-12-05 19:40:17.230763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.976 [2024-12-05 19:40:17.246663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.976 [2024-12-05 19:40:17.246698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.976 [2024-12-05 19:40:17.246727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.976 [2024-12-05 19:40:17.262545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.976 [2024-12-05 19:40:17.262579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.976 [2024-12-05 19:40:17.262607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.976 [2024-12-05 19:40:17.277995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.976 [2024-12-05 19:40:17.278029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.976 [2024-12-05 19:40:17.278058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.976 [2024-12-05 19:40:17.293523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.976 [2024-12-05 19:40:17.293573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.976 [2024-12-05 19:40:17.293600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.976 [2024-12-05 19:40:17.309464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.976 [2024-12-05 19:40:17.309530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.976 [2024-12-05 19:40:17.309560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.976 [2024-12-05 19:40:17.325446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.976 [2024-12-05 19:40:17.325496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.976 [2024-12-05 19:40:17.325509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.976 [2024-12-05 19:40:17.341280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.976 [2024-12-05 19:40:17.341330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.976 [2024-12-05 19:40:17.341343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.976 [2024-12-05 19:40:17.357209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.976 [2024-12-05 19:40:17.357268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.976 [2024-12-05 19:40:17.357281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.976 [2024-12-05 19:40:17.373045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.976 [2024-12-05 19:40:17.373289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.976 [2024-12-05 19:40:17.373307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.976 [2024-12-05 19:40:17.389491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.976 [2024-12-05 19:40:17.389699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.976 [2024-12-05 19:40:17.389888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:23.976 [2024-12-05 19:40:17.405947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:23.976 [2024-12-05 19:40:17.406156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.976 [2024-12-05 19:40:17.406359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.235 [2024-12-05 19:40:17.422319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.235 [2024-12-05 19:40:17.422512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.235 [2024-12-05 19:40:17.422699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.235 [2024-12-05 19:40:17.438603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.235 [2024-12-05 19:40:17.438797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.235 [2024-12-05 19:40:17.438992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.235 [2024-12-05 19:40:17.455144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.235 [2024-12-05 19:40:17.455337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.235 
[2024-12-05 19:40:17.455469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.235 [2024-12-05 19:40:17.471382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.235 [2024-12-05 19:40:17.471570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.235 [2024-12-05 19:40:17.471760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.235 [2024-12-05 19:40:17.487523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.235 [2024-12-05 19:40:17.487718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.235 [2024-12-05 19:40:17.487850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.235 [2024-12-05 19:40:17.503943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.235 [2024-12-05 19:40:17.504156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.235 [2024-12-05 19:40:17.504345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.235 [2024-12-05 19:40:17.520411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.235 [2024-12-05 19:40:17.520628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.235 [2024-12-05 19:40:17.520763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.235 [2024-12-05 19:40:17.537418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.235 [2024-12-05 19:40:17.537731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.235 [2024-12-05 19:40:17.537830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.235 [2024-12-05 19:40:17.553944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.235 [2024-12-05 19:40:17.553998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.235 [2024-12-05 19:40:17.554012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.235 [2024-12-05 19:40:17.569874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.235 [2024-12-05 19:40:17.570059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3474 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.235 [2024-12-05 19:40:17.570104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.235 [2024-12-05 19:40:17.585917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.235 [2024-12-05 19:40:17.585953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.235 [2024-12-05 19:40:17.585966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.235 [2024-12-05 19:40:17.601602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.235 [2024-12-05 19:40:17.601636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.235 [2024-12-05 19:40:17.601665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.235 [2024-12-05 19:40:17.617241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.235 [2024-12-05 19:40:17.617274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.235 [2024-12-05 19:40:17.617302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.235 [2024-12-05 19:40:17.632728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.235 [2024-12-05 19:40:17.632921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.235 [2024-12-05 19:40:17.632939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.235 [2024-12-05 19:40:17.648522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.235 [2024-12-05 19:40:17.648557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.235 [2024-12-05 19:40:17.648584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.235 [2024-12-05 19:40:17.664551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.235 [2024-12-05 19:40:17.664587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.235 [2024-12-05 19:40:17.664615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.495 [2024-12-05 19:40:17.680139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.495 [2024-12-05 19:40:17.680172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:46 nsid:1 lba:19226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.495 [2024-12-05 19:40:17.680185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.495 [2024-12-05 19:40:17.695554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.495 [2024-12-05 19:40:17.695589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.495 [2024-12-05 19:40:17.695617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.495 [2024-12-05 19:40:17.711145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.495 [2024-12-05 19:40:17.711194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.495 [2024-12-05 19:40:17.711221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.495 [2024-12-05 19:40:17.726596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.495 [2024-12-05 19:40:17.726773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.496 [2024-12-05 19:40:17.726805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.496 [2024-12-05 19:40:17.742443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.496 [2024-12-05 19:40:17.742479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.496 [2024-12-05 19:40:17.742507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.496 [2024-12-05 19:40:17.759086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.496 [2024-12-05 19:40:17.759168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.496 [2024-12-05 19:40:17.759198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.496 [2024-12-05 19:40:17.776758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.496 [2024-12-05 19:40:17.777035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.496 [2024-12-05 19:40:17.777053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.496 [2024-12-05 19:40:17.794655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.496 [2024-12-05 19:40:17.794876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.496 [2024-12-05 19:40:17.794896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.496 [2024-12-05 19:40:17.811322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.496 [2024-12-05 19:40:17.811370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.496 [2024-12-05 19:40:17.811401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.496 [2024-12-05 19:40:17.827776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.496 [2024-12-05 19:40:17.827812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.496 [2024-12-05 19:40:17.827825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.496 [2024-12-05 19:40:17.843985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.496 [2024-12-05 19:40:17.844188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.496 [2024-12-05 19:40:17.844206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.496 [2024-12-05 19:40:17.860504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.496 [2024-12-05 19:40:17.860540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.496 [2024-12-05 19:40:17.860569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.496 [2024-12-05 19:40:17.876945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.496 [2024-12-05 19:40:17.876981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.496 [2024-12-05 19:40:17.877009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.496 [2024-12-05 19:40:17.893268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.496 [2024-12-05 19:40:17.893303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.496 [2024-12-05 19:40:17.893332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.496 [2024-12-05 19:40:17.909460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.496 
[2024-12-05 19:40:17.909492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.496 [2024-12-05 19:40:17.909504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.496 [2024-12-05 19:40:17.925816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.496 [2024-12-05 19:40:17.925999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.496 [2024-12-05 19:40:17.926032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.755 [2024-12-05 19:40:17.941832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.755 [2024-12-05 19:40:17.941873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.755 [2024-12-05 19:40:17.941885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.755 [2024-12-05 19:40:17.959668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6befb0) 00:20:24.755 [2024-12-05 19:40:17.959704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.756 [2024-12-05 19:40:17.959734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.756 15370.50 IOPS, 60.04 MiB/s 00:20:24.756 Latency(us) 00:20:24.756 [2024-12-05T19:40:18.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.756 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:24.756 nvme0n1 : 2.01 15408.29 60.19 0.00 0.00 8300.71 7536.64 31695.59 00:20:24.756 [2024-12-05T19:40:18.193Z] =================================================================================================================== 00:20:24.756 [2024-12-05T19:40:18.193Z] Total : 15408.29 60.19 0.00 0.00 8300.71 7536.64 31695.59 00:20:24.756 { 00:20:24.756 "results": [ 00:20:24.756 { 00:20:24.756 "job": "nvme0n1", 00:20:24.756 "core_mask": "0x2", 00:20:24.756 "workload": "randread", 00:20:24.756 "status": "finished", 00:20:24.756 "queue_depth": 128, 00:20:24.756 "io_size": 4096, 00:20:24.756 "runtime": 2.011579, 00:20:24.756 "iops": 15408.293683718113, 00:20:24.756 "mibps": 60.18864720202388, 00:20:24.756 "io_failed": 0, 00:20:24.756 "io_timeout": 0, 00:20:24.756 "avg_latency_us": 8300.7078363959, 00:20:24.756 "min_latency_us": 7536.64, 00:20:24.756 "max_latency_us": 31695.592727272728 00:20:24.756 } 00:20:24.756 ], 00:20:24.756 "core_count": 1 00:20:24.756 } 00:20:24.756 19:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:24.756 19:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:24.756 19:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:24.756 | .driver_specific 00:20:24.756 | 
.nvme_error 00:20:24.756 | .status_code 00:20:24.756 | .command_transient_transport_error' 00:20:24.756 19:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:25.016 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 121 > 0 )) 00:20:25.016 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80230 00:20:25.016 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80230 ']' 00:20:25.016 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80230 00:20:25.016 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:25.016 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.016 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80230 00:20:25.016 killing process with pid 80230 00:20:25.016 Received shutdown signal, test time was about 2.000000 seconds 00:20:25.016 00:20:25.016 Latency(us) 00:20:25.016 [2024-12-05T19:40:18.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.016 [2024-12-05T19:40:18.453Z] =================================================================================================================== 00:20:25.016 [2024-12-05T19:40:18.453Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.016 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:25.016 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:25.016 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80230' 00:20:25.016 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80230 00:20:25.016 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80230 00:20:25.279 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:20:25.279 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:25.279 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:25.279 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:25.279 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:25.279 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80290 00:20:25.279 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:20:25.279 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80290 /var/tmp/bperf.sock 00:20:25.279 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80290 ']' 00:20:25.279 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:25.279 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.279 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:25.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:25.279 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.279 19:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:25.279 [2024-12-05 19:40:18.553704] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:20:25.279 [2024-12-05 19:40:18.554092] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80290 ] 00:20:25.279 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:25.279 Zero copy mechanism will not be used. 00:20:25.279 [2024-12-05 19:40:18.700608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.538 [2024-12-05 19:40:18.755596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.538 [2024-12-05 19:40:18.808082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:26.473 19:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.473 19:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:26.473 19:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:26.473 19:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:26.473 19:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:26.473 19:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.473 19:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:26.473 19:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.473 19:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:26.473 19:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:26.731 nvme0n1 00:20:26.731 19:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:26.731 19:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.731 19:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:26.731 19:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.731 19:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:26.731 19:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:26.991 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:26.991 Zero copy mechanism will not be used. 00:20:26.991 Running I/O for 2 seconds... 00:20:26.991 [2024-12-05 19:40:20.273261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.273326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.273342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.277457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.277509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.277538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.281465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.281502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.281515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.285416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.285451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.285481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.289395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.289432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.289462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.293510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.293545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.293574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.297540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.297576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.297605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.301625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.301660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.301689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.305690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.305725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.305760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.309661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.309696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.309725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.313652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.313687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.313716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.317539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.317573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.317601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.321466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 
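For reference, the run captured above boils down to the following sketch, reconstructed only from the xtrace lines in this log; bperf_rpc and bperf_py are the suite's wrappers around rpc.py and bdevperf.py pointed at /var/tmp/bperf.sock, while rpc_cmd uses the suite's default RPC socket. It is an illustration, not part of the captured output:

    # reconstructed sketch of the digest-error randread run (shell, from the xtrace above)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # keep NVMe error counters, retry indefinitely
    rpc_cmd accel_error_inject_error -o crc32c -t disable                      # no crc32c injection while attaching
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # data digest enabled
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32                # arm crc32c corruption (-i 32 as logged)
    bperf_py perform_tests                                                     # 2-second randread, qd=16, 128 KiB I/O

The injected crc32c corruption is what produces the repeated "data digest error" records and COMMAND TRANSIENT TRANSPORT ERROR completions that follow.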
[2024-12-05 19:40:20.321499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.321527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.325573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.325612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.325625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.329594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.329629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.329658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.333826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.333862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.333874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.337942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.337977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.338005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.991 [2024-12-05 19:40:20.342110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.991 [2024-12-05 19:40:20.342169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.991 [2024-12-05 19:40:20.342183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.346101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.346143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.346172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.350008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.350042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.350071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.354085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.354147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.354178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.358043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.358077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.358106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.362205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.362239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.362267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.366238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.366271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.366300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.370473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.370508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.370537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.374711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.374758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.374771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.378795] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.378829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.378841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.382810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.382845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.382873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.386787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.386820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.386848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.390956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.390991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.391019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.394972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.395008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.395037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.398904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.398938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.398967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.402922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.402955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.402984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:20:26.992 [2024-12-05 19:40:20.406915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.406950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.406978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.410908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.410942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.410971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.414922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.414957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.414985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.418780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.418814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.418843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.992 [2024-12-05 19:40:20.422844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:26.992 [2024-12-05 19:40:20.422878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.992 [2024-12-05 19:40:20.422907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.252 [2024-12-05 19:40:20.427032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.252 [2024-12-05 19:40:20.427067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.252 [2024-12-05 19:40:20.427080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.252 [2024-12-05 19:40:20.431187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.252 [2024-12-05 19:40:20.431239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.252 [2024-12-05 19:40:20.431268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.252 [2024-12-05 19:40:20.435425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.252 [2024-12-05 19:40:20.435460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.252 [2024-12-05 19:40:20.435490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.252 [2024-12-05 19:40:20.439586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.252 [2024-12-05 19:40:20.439621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.252 [2024-12-05 19:40:20.439650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.252 [2024-12-05 19:40:20.444094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.252 [2024-12-05 19:40:20.444148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.252 [2024-12-05 19:40:20.444163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.252 [2024-12-05 19:40:20.448427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.252 [2024-12-05 19:40:20.448465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.252 [2024-12-05 19:40:20.448494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.252 [2024-12-05 19:40:20.452815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.252 [2024-12-05 19:40:20.452850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.252 [2024-12-05 19:40:20.452875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.252 [2024-12-05 19:40:20.457139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.252 [2024-12-05 19:40:20.457197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.252 [2024-12-05 19:40:20.457210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.252 [2024-12-05 19:40:20.461404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.252 [2024-12-05 19:40:20.461438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.252 [2024-12-05 19:40:20.461466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.252 [2024-12-05 19:40:20.465674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.252 [2024-12-05 19:40:20.465707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.252 [2024-12-05 19:40:20.465736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.252 [2024-12-05 19:40:20.469826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.252 [2024-12-05 19:40:20.469862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.252 [2024-12-05 19:40:20.469890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.252 [2024-12-05 19:40:20.473990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.252 [2024-12-05 19:40:20.474025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.252 [2024-12-05 19:40:20.474054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.252 [2024-12-05 19:40:20.478281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.252 [2024-12-05 19:40:20.478315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.252 [2024-12-05 19:40:20.478344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.252 [2024-12-05 19:40:20.482254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.252 [2024-12-05 19:40:20.482286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.482316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.486378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.486413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.486442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.490471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.490506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.490535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.494891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.494928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.494958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.499025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.499060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.499089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.503178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.503211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.503240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.507139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.507173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.507201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.511078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.511161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.511175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.515232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.515265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.515294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.519054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.519089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 
[2024-12-05 19:40:20.519117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.523013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.523047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.523076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.527102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.527147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.527179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.531160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.531218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.531230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.535205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.535232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.535244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.539152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.539184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.539214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.543160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.543196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.543224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.547047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.547083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17216 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.547112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.551218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.551254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.551299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.555342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.555376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.555405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.559325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.559359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.559388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.563463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.563513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.563542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.567648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.567682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.567717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.571790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.571824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.571853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.576033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.576071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.576085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.580172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.580224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.580252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.584173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.584223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.584250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.588365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.588399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.253 [2024-12-05 19:40:20.588429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.253 [2024-12-05 19:40:20.592510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.253 [2024-12-05 19:40:20.592543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.592572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.596572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.596607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.596637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.600708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.600743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.600772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.604908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.604943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.604972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.608972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.609189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.609225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.613448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.613484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.613513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.617586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.617622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.617652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.621729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.621764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.621793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.625901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.625936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.625965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.629975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.630010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.630038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.634071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 
[2024-12-05 19:40:20.634106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.634167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.638074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.638135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.638149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.642326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.642359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.642388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.646391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.646425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.646453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.650367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.650400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.650428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.654480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.654515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.654544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.658344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.658377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.658405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.662360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.662394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.662423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.666247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.666279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.666308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.670172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.670205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.670233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.674040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.674073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.674102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.678131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.678174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.678202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.682051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.682085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.682113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.254 [2024-12-05 19:40:20.685987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.254 [2024-12-05 19:40:20.686021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.254 [2024-12-05 19:40:20.686049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.514 [2024-12-05 19:40:20.689912] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.514 [2024-12-05 19:40:20.689945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.514 [2024-12-05 19:40:20.689974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.514 [2024-12-05 19:40:20.693949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.514 [2024-12-05 19:40:20.693982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.514 [2024-12-05 19:40:20.694011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.514 [2024-12-05 19:40:20.697996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.514 [2024-12-05 19:40:20.698030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.514 [2024-12-05 19:40:20.698059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.514 [2024-12-05 19:40:20.701993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.514 [2024-12-05 19:40:20.702027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.514 [2024-12-05 19:40:20.702056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.514 [2024-12-05 19:40:20.706036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.514 [2024-12-05 19:40:20.706070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.514 [2024-12-05 19:40:20.706099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.514 [2024-12-05 19:40:20.710071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.514 [2024-12-05 19:40:20.710106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.514 [2024-12-05 19:40:20.710143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.514 [2024-12-05 19:40:20.714014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.514 [2024-12-05 19:40:20.714048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.514 [2024-12-05 19:40:20.714077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:20:27.514 [2024-12-05 19:40:20.717950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.514 [2024-12-05 19:40:20.717984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.514 [2024-12-05 19:40:20.718013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.514 [2024-12-05 19:40:20.722001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.514 [2024-12-05 19:40:20.722035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.514 [2024-12-05 19:40:20.722064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.725923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.725957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.725986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.729846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.729880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.729909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.733934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.733969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.733997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.737942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.737976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.738005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.742004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.742038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.742068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.746119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.746162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.746191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.750066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.750100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.750144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.754054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.754088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.754117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.758089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.758133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.758163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.762093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.762152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.762164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.766025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.766060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.766089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.769996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.770030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.770059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.773993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.774027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.774056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.777974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.778008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.778036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.782053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.782087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.782115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.786040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.786074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.786103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.790066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.790101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.790161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.794134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.794172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.794200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.798090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.798148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.798161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.802120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.802178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.802192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.805991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.806026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.806054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.809988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.810023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.810051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.813947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.813981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.814010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.817989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.818023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.818051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.822049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.822084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.822096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.826003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.826037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 
[2024-12-05 19:40:20.826064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.830050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.515 [2024-12-05 19:40:20.830083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.515 [2024-12-05 19:40:20.830112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.515 [2024-12-05 19:40:20.834054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.834089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.834117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.837979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.838013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.838042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.841971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.842006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.842034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.846046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.846081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.846110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.850058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.850092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.850121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.854102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.854155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.854183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.858081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.858147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.858176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.862164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.862196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.862225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.865963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.865997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.866025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.869979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.870014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.870043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.874083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.874140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.874153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.878019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.878053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.878082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.882133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.882165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.882194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.886058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.886092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.886121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.890147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.890180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.890209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.894055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.894089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.894117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.898040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.898074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.898103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.901978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.902012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.902040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.905978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.906012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.906040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.909975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.910009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.910037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.914147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.914180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.914208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.917989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.918023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.918051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.921979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.922013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.922041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.926039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.926073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.926101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.930040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.930073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.930102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.934174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.934208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.934237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.938250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 
[2024-12-05 19:40:20.938282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.516 [2024-12-05 19:40:20.938310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.516 [2024-12-05 19:40:20.942216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.516 [2024-12-05 19:40:20.942249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.517 [2024-12-05 19:40:20.942278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.517 [2024-12-05 19:40:20.946274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.517 [2024-12-05 19:40:20.946307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.517 [2024-12-05 19:40:20.946337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:20.950393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.776 [2024-12-05 19:40:20.950427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.776 [2024-12-05 19:40:20.950456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:20.954481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.776 [2024-12-05 19:40:20.954517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.776 [2024-12-05 19:40:20.954545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:20.958516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.776 [2024-12-05 19:40:20.958550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.776 [2024-12-05 19:40:20.958579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:20.962609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.776 [2024-12-05 19:40:20.962643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.776 [2024-12-05 19:40:20.962671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:20.966620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xe129b0) 00:20:27.776 [2024-12-05 19:40:20.966654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.776 [2024-12-05 19:40:20.966681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:20.970546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.776 [2024-12-05 19:40:20.970588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.776 [2024-12-05 19:40:20.970617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:20.974620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.776 [2024-12-05 19:40:20.974654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.776 [2024-12-05 19:40:20.974684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:20.978664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.776 [2024-12-05 19:40:20.978698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.776 [2024-12-05 19:40:20.978727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:20.982673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.776 [2024-12-05 19:40:20.982708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.776 [2024-12-05 19:40:20.982736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:20.986648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.776 [2024-12-05 19:40:20.986682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.776 [2024-12-05 19:40:20.986711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:20.990716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.776 [2024-12-05 19:40:20.990749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.776 [2024-12-05 19:40:20.990777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:20.994583] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.776 [2024-12-05 19:40:20.994616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.776 [2024-12-05 19:40:20.994644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:20.998587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.776 [2024-12-05 19:40:20.998621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.776 [2024-12-05 19:40:20.998649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:21.002627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.776 [2024-12-05 19:40:21.002661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.776 [2024-12-05 19:40:21.002690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:21.006626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.776 [2024-12-05 19:40:21.006660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.776 [2024-12-05 19:40:21.006689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:21.010638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.776 [2024-12-05 19:40:21.010672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.776 [2024-12-05 19:40:21.010701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.776 [2024-12-05 19:40:21.014673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.014707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.014735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.018752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.018786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.018814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:20:27.777 [2024-12-05 19:40:21.022825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.022859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.022888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.026946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.026981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.027010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.030913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.030947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.030977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.034894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.034929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.034958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.038971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.039006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.039035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.042870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.042904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.042932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.046984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.047019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.047047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.051087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.051129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.051159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.055078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.055141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.055171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.059151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.059184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.059212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.063096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.063147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.063180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.067057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.067091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.067120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.071124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.071158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.071187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.075118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.075163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.075191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.079135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.079167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.079195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.083010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.083044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.083073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.086990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.087025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.087053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.090981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.091015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.091044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.095135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.095174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.095187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.099156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.099195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.099224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.103107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.103152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.103166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.107077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.107125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.107156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.111157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.111191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.111203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.115146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.115180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.115192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.119045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.777 [2024-12-05 19:40:21.119081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.777 [2024-12-05 19:40:21.119093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.777 [2024-12-05 19:40:21.123041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.123076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.123088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.127131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.127173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.127202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.131180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.131212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 
[2024-12-05 19:40:21.131242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.135256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.135289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.135319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.139234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.139266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.139295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.143619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.143654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.143683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.147995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.148033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.148047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.152402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.152440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.152454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.156954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.156991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.157020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.161733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.161767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.161794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.166387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.166423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.166437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.170956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.170990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.171018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.175436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.175607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.175625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.180094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.180138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.180153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.184585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.184619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.184647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.189012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.189048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.189076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.193518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.193556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.193570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.198046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.198084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.198114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.202466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.202505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.202519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:27.778 [2024-12-05 19:40:21.206933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:27.778 [2024-12-05 19:40:21.206968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.778 [2024-12-05 19:40:21.206996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.037 [2024-12-05 19:40:21.211137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.037 [2024-12-05 19:40:21.211199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.037 [2024-12-05 19:40:21.211244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.037 [2024-12-05 19:40:21.215427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.037 [2024-12-05 19:40:21.215464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.037 [2024-12-05 19:40:21.215494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.219835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.219870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.219924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.224208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.224290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.224320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.228687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.228736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.228764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.233133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.233199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.233244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.237699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.237889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.237922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.242253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.242290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.242320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.246731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.246763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.246776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.251178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.251216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.251234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.255564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 
00:20:28.038 [2024-12-05 19:40:21.255768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.255918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.260388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.260573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.260773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.265361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.265534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.265657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.038 7517.00 IOPS, 939.62 MiB/s [2024-12-05T19:40:21.475Z] [2024-12-05 19:40:21.271406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.271589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.271816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.276364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.276526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.276652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.280990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.281027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.281057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.285589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.285624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.285654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.289986] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.290022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.290051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.294596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.294764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.294798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.299092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.299176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.299207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.303629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.303682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.303710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.308153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.308187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.308200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.312319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.312355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.312367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.316575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.316610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.316639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:20:28.038 [2024-12-05 19:40:21.320849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.320883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.320912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.325291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.325327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.325357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.329604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.329668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.329696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.333873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.038 [2024-12-05 19:40:21.333906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.038 [2024-12-05 19:40:21.333935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.038 [2024-12-05 19:40:21.338040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.338105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.338132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.342211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.342261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.342289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.346318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.346352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.346381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.350409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.350444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.350472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.354579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.354609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.354622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.358828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.358863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.358891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.362988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.363024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.363051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.367312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.367355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.367384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.371506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.371540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.371569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.375554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.375588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.375617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.379703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.379739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.379782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.384087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.384130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.384143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.388290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.388325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.388338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.392464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.392513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.392556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.396705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.396755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.396768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.401096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.401164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.401179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.405679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.405718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.405748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.410065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.410101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.410141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.414548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.414587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.414600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.418922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.418956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.418984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.423390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.423426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.423454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.427725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.427760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.427788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.432053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.432092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.432105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.436599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.436633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 
[2024-12-05 19:40:21.436661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.441043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.441077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.441106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.445456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.445494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.445508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.449701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.449748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.039 [2024-12-05 19:40:21.449777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.039 [2024-12-05 19:40:21.454004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.039 [2024-12-05 19:40:21.454038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.040 [2024-12-05 19:40:21.454066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.040 [2024-12-05 19:40:21.458514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.040 [2024-12-05 19:40:21.458743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.040 [2024-12-05 19:40:21.458777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.040 [2024-12-05 19:40:21.463111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.040 [2024-12-05 19:40:21.463169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.040 [2024-12-05 19:40:21.463182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.040 [2024-12-05 19:40:21.467693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.040 [2024-12-05 19:40:21.467728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.040 [2024-12-05 19:40:21.467757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.040 [2024-12-05 19:40:21.472078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.040 [2024-12-05 19:40:21.472130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.040 [2024-12-05 19:40:21.472145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.476406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.476443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.476472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.480729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.480764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.480792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.485038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.485071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.485099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.489064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.489097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.489135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.493124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.493327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.493344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.497830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.497866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:7 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.497879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.502444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.502481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.502495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.506887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.506922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.506951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.511347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.511385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.511399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.515776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.515812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.515824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.520108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.520152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.520165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.524509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.524560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.524572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.528637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.528670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.528699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.533079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.533143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.533173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.537694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.537729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.537757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.542089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.542144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.542159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.546503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.546541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.546555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.550896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.550933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.550946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.555248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.300 [2024-12-05 19:40:21.555283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.300 [2024-12-05 19:40:21.555311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.300 [2024-12-05 19:40:21.559343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 
[2024-12-05 19:40:21.559378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.559390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.563468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.563502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.563530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.567546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.567579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.567607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.571755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.571793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.571806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.576047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.576085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.576099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.580349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.580394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.580423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.584713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.584746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.584774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.588928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.588962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.588990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.592913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.592948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.592976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.596934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.596968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.596997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.601228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.601281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.601295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.605878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.605916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.605945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.610525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.610568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.610582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.614936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.615092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.615123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.619399] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.619439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.619453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.623774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.623812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.623824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.628248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.628286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.628300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.632610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.632647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.632660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.636972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.637008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.637021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.641403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.641440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.641454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.645903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.645941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.645954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:20:28.301 [2024-12-05 19:40:21.650076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.650131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.650146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.654337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.654493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.654510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.658865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.658901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.658914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.663416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.663454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.663467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.667962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.668000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.301 [2024-12-05 19:40:21.668013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.301 [2024-12-05 19:40:21.672209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.301 [2024-12-05 19:40:21.672246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.302 [2024-12-05 19:40:21.672268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.302 [2024-12-05 19:40:21.676432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.302 [2024-12-05 19:40:21.676469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.302 [2024-12-05 19:40:21.676482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.302 [2024-12-05 19:40:21.680822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.302 [2024-12-05 19:40:21.680858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.302 [2024-12-05 19:40:21.680886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.302 [2024-12-05 19:40:21.685064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.302 [2024-12-05 19:40:21.685099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.302 [2024-12-05 19:40:21.685158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.302 [2024-12-05 19:40:21.689468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.302 [2024-12-05 19:40:21.689681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.302 [2024-12-05 19:40:21.689700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.302 [2024-12-05 19:40:21.694404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.302 [2024-12-05 19:40:21.694617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.302 [2024-12-05 19:40:21.694766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.302 [2024-12-05 19:40:21.699288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.302 [2024-12-05 19:40:21.699465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.302 [2024-12-05 19:40:21.699590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.302 [2024-12-05 19:40:21.704110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.302 [2024-12-05 19:40:21.704296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.302 [2024-12-05 19:40:21.704421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.302 [2024-12-05 19:40:21.708946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.302 [2024-12-05 19:40:21.709167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.302 [2024-12-05 19:40:21.709307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.302 [2024-12-05 19:40:21.713808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.302 [2024-12-05 19:40:21.714018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.302 [2024-12-05 19:40:21.714249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.302 [2024-12-05 19:40:21.718787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.302 [2024-12-05 19:40:21.718981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.302 [2024-12-05 19:40:21.719181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.302 [2024-12-05 19:40:21.723706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.302 [2024-12-05 19:40:21.723927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.302 [2024-12-05 19:40:21.724067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.302 [2024-12-05 19:40:21.728547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.302 [2024-12-05 19:40:21.728775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.302 [2024-12-05 19:40:21.728894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.302 [2024-12-05 19:40:21.733480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.302 [2024-12-05 19:40:21.733703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.302 [2024-12-05 19:40:21.733724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.561 [2024-12-05 19:40:21.738088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.561 [2024-12-05 19:40:21.738154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.561 [2024-12-05 19:40:21.738184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.561 [2024-12-05 19:40:21.742450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.561 [2024-12-05 19:40:21.742487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.561 [2024-12-05 19:40:21.742501] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.561 [2024-12-05 19:40:21.746587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.561 [2024-12-05 19:40:21.746638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.561 [2024-12-05 19:40:21.746650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.561 [2024-12-05 19:40:21.750832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.561 [2024-12-05 19:40:21.750868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.750881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.755016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.755052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.755064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.759313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.759348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.759361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.763678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.763713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.763742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.768009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.768047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.768061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.772408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.772446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:28.562 [2024-12-05 19:40:21.772461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.776779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.776814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.776843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.781167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.781228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.781258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.785658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.785693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.785721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.790053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.790087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.790116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.794376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.794411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.794440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.798651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.798689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.798718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.803101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.803169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17504 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.803198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.807594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.807630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.807644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.811973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.812010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.812023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.816409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.816445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.816458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.820854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.820891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.820904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.825330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.825383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.825397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.829775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.829812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.829825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.834051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.834086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.834114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.838393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.838430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.838443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.842649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.842682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.842710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.846879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.846912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.846940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.851077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.851125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.851155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.855393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.855428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.855441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.859799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.859834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.859846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.864263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.562 [2024-12-05 19:40:21.864298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.562 [2024-12-05 19:40:21.864327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.562 [2024-12-05 19:40:21.868587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.868620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.868648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.872937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.872972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.872984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.877387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.877421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.877450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.881551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.881775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.881809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.885948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.885982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.886012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.890107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.890169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.890198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.894004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 
00:20:28.563 [2024-12-05 19:40:21.894038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.894066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.898019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.898053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.898080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.901895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.901928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.901956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.905861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.905894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.905922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.909716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.909749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.909777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.913562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.913596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.913624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.917531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.917565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.917593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.921509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.921543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.921572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.925721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.925755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.925783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.929895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.929929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.929958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.934092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.934155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.934185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.938451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.938486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.938515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.942733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.942767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.942795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.947006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.947041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.947069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.951346] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.951382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.951411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.955640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.955674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.955686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.960040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.960078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.960091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.964289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.964323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.964336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.968256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.968291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.968303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.972224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.972259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.563 [2024-12-05 19:40:21.972271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.563 [2024-12-05 19:40:21.976334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.563 [2024-12-05 19:40:21.976370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.564 [2024-12-05 19:40:21.976383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:20:28.564 [2024-12-05 19:40:21.980392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.564 [2024-12-05 19:40:21.980428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.564 [2024-12-05 19:40:21.980441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.564 [2024-12-05 19:40:21.984547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.564 [2024-12-05 19:40:21.984581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.564 [2024-12-05 19:40:21.984609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.564 [2024-12-05 19:40:21.988676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.564 [2024-12-05 19:40:21.988710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.564 [2024-12-05 19:40:21.988738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.564 [2024-12-05 19:40:21.992927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.564 [2024-12-05 19:40:21.992959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.564 [2024-12-05 19:40:21.992988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.823 [2024-12-05 19:40:21.997273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.823 [2024-12-05 19:40:21.997307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.823 [2024-12-05 19:40:21.997335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.823 [2024-12-05 19:40:22.001397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.823 [2024-12-05 19:40:22.001431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.823 [2024-12-05 19:40:22.001460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.823 [2024-12-05 19:40:22.005424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.823 [2024-12-05 19:40:22.005458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.823 [2024-12-05 19:40:22.005487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.823 [2024-12-05 19:40:22.009683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.823 [2024-12-05 19:40:22.009717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.823 [2024-12-05 19:40:22.009740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.823 [2024-12-05 19:40:22.013965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.823 [2024-12-05 19:40:22.014000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.823 [2024-12-05 19:40:22.014024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.823 [2024-12-05 19:40:22.018195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.823 [2024-12-05 19:40:22.018260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.823 [2024-12-05 19:40:22.018290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.823 [2024-12-05 19:40:22.022676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.823 [2024-12-05 19:40:22.022710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.823 [2024-12-05 19:40:22.022738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.823 [2024-12-05 19:40:22.027061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.823 [2024-12-05 19:40:22.027143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.823 [2024-12-05 19:40:22.027173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.823 [2024-12-05 19:40:22.031508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.823 [2024-12-05 19:40:22.031752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.823 [2024-12-05 19:40:22.031786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.823 [2024-12-05 19:40:22.036022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.823 [2024-12-05 19:40:22.036060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.036072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.040492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.040570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.040598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.044972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.045008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.045037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.049482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.049520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.049534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.053834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.053868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.053896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.058027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.058062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.058090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.062395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.062432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.062476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.066637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.066670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.066698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.070975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.071012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.071026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.075439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.075476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.075490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.079847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.079903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.079934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.084138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.084189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.084218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.088121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.088164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.088177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.092175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.092225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.092237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.096237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.096285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:28.824 [2024-12-05 19:40:22.096313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.100314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.100346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.100374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.104428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.104462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.104490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.108440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.108473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.108502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.112379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.112413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.112441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.116421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.116455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.116484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.824 [2024-12-05 19:40:22.120335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.824 [2024-12-05 19:40:22.120368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.824 [2024-12-05 19:40:22.120396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.124307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.124339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10848 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.124368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.128347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.128382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.128394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.132342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.132377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.132389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.136779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.136829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.136842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.140975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.141013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.141027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.145171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.145209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.145223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.149632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.149666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.149695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.153790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.153824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.153836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.157740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.157791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.157803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.161806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.161842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.161854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.165787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.165822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.165834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.170038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.170075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.170088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.174480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.174518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.174531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.178879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.178913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.178941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.183238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.183275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.183288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.187694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.187746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.187775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.192093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.192146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.192161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.196447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.196486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.196499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.200802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.200837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.200866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.205262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.205298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.205327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.209759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 [2024-12-05 19:40:22.209794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.825 [2024-12-05 19:40:22.209822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.825 [2024-12-05 19:40:22.214121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.825 
[2024-12-05 19:40:22.214183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.826 [2024-12-05 19:40:22.214213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.826 [2024-12-05 19:40:22.218294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.826 [2024-12-05 19:40:22.218328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.826 [2024-12-05 19:40:22.218357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.826 [2024-12-05 19:40:22.222247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.826 [2024-12-05 19:40:22.222280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.826 [2024-12-05 19:40:22.222309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.826 [2024-12-05 19:40:22.226247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.826 [2024-12-05 19:40:22.226281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.826 [2024-12-05 19:40:22.226310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.826 [2024-12-05 19:40:22.230267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.826 [2024-12-05 19:40:22.230301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.826 [2024-12-05 19:40:22.230329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.826 [2024-12-05 19:40:22.234206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.826 [2024-12-05 19:40:22.234239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.826 [2024-12-05 19:40:22.234267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.826 [2024-12-05 19:40:22.238104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.826 [2024-12-05 19:40:22.238170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.826 [2024-12-05 19:40:22.238183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.826 [2024-12-05 19:40:22.242095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xe129b0) 00:20:28.826 [2024-12-05 19:40:22.242156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.826 [2024-12-05 19:40:22.242168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:28.826 [2024-12-05 19:40:22.246055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.826 [2024-12-05 19:40:22.246089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.826 [2024-12-05 19:40:22.246118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.826 [2024-12-05 19:40:22.250064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.826 [2024-12-05 19:40:22.250098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.826 [2024-12-05 19:40:22.250142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.826 [2024-12-05 19:40:22.254033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.826 [2024-12-05 19:40:22.254067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.826 [2024-12-05 19:40:22.254096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.826 [2024-12-05 19:40:22.257905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:28.826 [2024-12-05 19:40:22.257939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.826 [2024-12-05 19:40:22.257967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:29.084 [2024-12-05 19:40:22.261987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:29.084 [2024-12-05 19:40:22.262023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.084 [2024-12-05 19:40:22.262051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:29.084 [2024-12-05 19:40:22.266028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe129b0) 00:20:29.084 [2024-12-05 19:40:22.266062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.084 [2024-12-05 19:40:22.266098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:29.084 7354.50 IOPS, 919.31 MiB/s 00:20:29.084 Latency(us) 00:20:29.084 
[2024-12-05T19:40:22.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.084 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:29.084 nvme0n1 : 2.00 7351.27 918.91 0.00 0.00 2173.08 1757.56 9353.77 00:20:29.084 [2024-12-05T19:40:22.521Z] =================================================================================================================== 00:20:29.084 [2024-12-05T19:40:22.521Z] Total : 7351.27 918.91 0.00 0.00 2173.08 1757.56 9353.77 00:20:29.084 { 00:20:29.084 "results": [ 00:20:29.084 { 00:20:29.084 "job": "nvme0n1", 00:20:29.084 "core_mask": "0x2", 00:20:29.084 "workload": "randread", 00:20:29.084 "status": "finished", 00:20:29.084 "queue_depth": 16, 00:20:29.084 "io_size": 131072, 00:20:29.084 "runtime": 2.003056, 00:20:29.084 "iops": 7351.267263621186, 00:20:29.084 "mibps": 918.9084079526483, 00:20:29.084 "io_failed": 0, 00:20:29.084 "io_timeout": 0, 00:20:29.084 "avg_latency_us": 2173.0762500385863, 00:20:29.084 "min_latency_us": 1757.5563636363636, 00:20:29.084 "max_latency_us": 9353.774545454546 00:20:29.084 } 00:20:29.084 ], 00:20:29.084 "core_count": 1 00:20:29.084 } 00:20:29.084 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:29.084 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:29.084 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:29.084 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:29.084 | .driver_specific 00:20:29.084 | .nvme_error 00:20:29.084 | .status_code 00:20:29.084 | .command_transient_transport_error' 00:20:29.342 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 475 > 0 )) 00:20:29.342 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80290 00:20:29.342 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80290 ']' 00:20:29.342 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80290 00:20:29.342 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:29.342 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.342 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80290 00:20:29.342 killing process with pid 80290 00:20:29.342 Received shutdown signal, test time was about 2.000000 seconds 00:20:29.342 00:20:29.342 Latency(us) 00:20:29.342 [2024-12-05T19:40:22.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.342 [2024-12-05T19:40:22.779Z] =================================================================================================================== 00:20:29.342 [2024-12-05T19:40:22.779Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:29.342 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:29.342 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:29.342 19:40:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80290' 00:20:29.342 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80290 00:20:29.342 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80290 00:20:29.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:29.599 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:29.600 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:29.600 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:29.600 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:29.600 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:29.600 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80352 00:20:29.600 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:29.600 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80352 /var/tmp/bperf.sock 00:20:29.600 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80352 ']' 00:20:29.600 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:29.600 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.600 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:29.600 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.600 19:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:29.600 [2024-12-05 19:40:22.854687] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:20:29.600 [2024-12-05 19:40:22.855233] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80352 ] 00:20:29.600 [2024-12-05 19:40:22.997226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.923 [2024-12-05 19:40:23.056746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.923 [2024-12-05 19:40:23.110452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:30.491 19:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.491 19:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:30.491 19:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:30.491 19:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:30.749 19:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:30.749 19:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.749 19:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:30.749 19:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.749 19:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:30.749 19:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:31.008 nvme0n1 00:20:31.266 19:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:31.266 19:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.266 19:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:31.266 19:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.266 19:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:31.266 19:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:31.266 Running I/O for 2 seconds... 
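For readability, the per-iteration RPC sequence that host/digest.sh drives in the trace above is condensed below. Every command, address, NQN, and jq filter is copied from the xtrace; the shell framing, the bperf_rpc wrapper definition, and the remarks about which socket rpc_cmd uses are illustrative assumptions, not the actual test-script source.

# Sketch only -- reconstructed from the xtrace above, not the digest.sh source itself.
bperf_rpc() {   # assumption: wrapper seen in the trace expanding to rpc.py against bdevperf's socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
}

# Enable per-bdev NVMe error counters and unlimited retries in bdevperf.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# rpc_cmd is SPDK's shared test helper for the running target app; its socket is not shown in this trace.
rpc_cmd accel_error_inject_error -o crc32c -t disable

# Attach the TCP controller with data digest (--ddgst) enabled on the host side.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt 256 crc32c operations so the host sees data digest errors, then run the workload.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

# Afterwards the transient-transport-error count is read back from iostat, as in the trace:
bperf_rpc bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'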
00:20:31.266 [2024-12-05 19:40:24.566574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef7100 00:20:31.266 [2024-12-05 19:40:24.568258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.266 [2024-12-05 19:40:24.568482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:31.266 [2024-12-05 19:40:24.581851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef7970 00:20:31.266 [2024-12-05 19:40:24.583434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.266 [2024-12-05 19:40:24.583501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.266 [2024-12-05 19:40:24.597034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef81e0 00:20:31.266 [2024-12-05 19:40:24.598703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.266 [2024-12-05 19:40:24.598732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:31.267 [2024-12-05 19:40:24.612007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef8a50 00:20:31.267 [2024-12-05 19:40:24.613676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.267 [2024-12-05 19:40:24.613707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:31.267 [2024-12-05 19:40:24.627145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef92c0 00:20:31.267 [2024-12-05 19:40:24.628744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.267 [2024-12-05 19:40:24.628919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:31.267 [2024-12-05 19:40:24.642484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef9b30 00:20:31.267 [2024-12-05 19:40:24.643890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.267 [2024-12-05 19:40:24.643942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:31.267 [2024-12-05 19:40:24.657271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efa3a0 00:20:31.267 [2024-12-05 19:40:24.658888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.267 [2024-12-05 19:40:24.658915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 
p:0 m:0 dnr:0 00:20:31.267 [2024-12-05 19:40:24.672348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efac10 00:20:31.267 [2024-12-05 19:40:24.674060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.267 [2024-12-05 19:40:24.674132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:31.267 [2024-12-05 19:40:24.687541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efb480 00:20:31.267 [2024-12-05 19:40:24.689076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.267 [2024-12-05 19:40:24.689104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:31.267 [2024-12-05 19:40:24.702389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efbcf0 00:20:31.526 [2024-12-05 19:40:24.703766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.703976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:31.526 [2024-12-05 19:40:24.717672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efc560 00:20:31.526 [2024-12-05 19:40:24.718987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.719020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:31.526 [2024-12-05 19:40:24.732540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efcdd0 00:20:31.526 [2024-12-05 19:40:24.734193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.734232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:31.526 [2024-12-05 19:40:24.747752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efd640 00:20:31.526 [2024-12-05 19:40:24.749283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.749310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:31.526 [2024-12-05 19:40:24.762855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efdeb0 00:20:31.526 [2024-12-05 19:40:24.764208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.764416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0068 p:0 m:0 dnr:0 00:20:31.526 [2024-12-05 19:40:24.778069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efe720 00:20:31.526 [2024-12-05 19:40:24.779405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.779438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:31.526 [2024-12-05 19:40:24.793197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eff3c8 00:20:31.526 [2024-12-05 19:40:24.794520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.794552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:31.526 [2024-12-05 19:40:24.814195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eff3c8 00:20:31.526 [2024-12-05 19:40:24.816616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.816823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:31.526 [2024-12-05 19:40:24.829559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efe720 00:20:31.526 [2024-12-05 19:40:24.832015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.832050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:31.526 [2024-12-05 19:40:24.844640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efdeb0 00:20:31.526 [2024-12-05 19:40:24.847297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.847330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:31.526 [2024-12-05 19:40:24.859773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efd640 00:20:31.526 [2024-12-05 19:40:24.862187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.862221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:31.526 [2024-12-05 19:40:24.874875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efcdd0 00:20:31.526 [2024-12-05 19:40:24.877388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.877420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 
cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:31.526 [2024-12-05 19:40:24.890228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efc560 00:20:31.526 [2024-12-05 19:40:24.892779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.892975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:31.526 [2024-12-05 19:40:24.907091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efbcf0 00:20:31.526 [2024-12-05 19:40:24.909678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.909744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:31.526 [2024-12-05 19:40:24.923602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efb480 00:20:31.526 [2024-12-05 19:40:24.925978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.926013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:31.526 [2024-12-05 19:40:24.939239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efac10 00:20:31.526 [2024-12-05 19:40:24.941675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.941710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:31.526 [2024-12-05 19:40:24.955170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016efa3a0 00:20:31.526 [2024-12-05 19:40:24.957658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.526 [2024-12-05 19:40:24.957691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:24.970930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef9b30 00:20:31.789 [2024-12-05 19:40:24.973386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:24.973418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:24.986405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef92c0 00:20:31.789 [2024-12-05 19:40:24.988720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:24.988915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:25.002083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef8a50 00:20:31.789 [2024-12-05 19:40:25.004457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:25.004492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:25.017976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef81e0 00:20:31.789 [2024-12-05 19:40:25.020266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:25.020301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:25.033873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef7970 00:20:31.789 [2024-12-05 19:40:25.036315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:25.036347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:25.050235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef7100 00:20:31.789 [2024-12-05 19:40:25.052571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:25.052606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:25.066377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef6890 00:20:31.789 [2024-12-05 19:40:25.068739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:25.068788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:25.082577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef6020 00:20:31.789 [2024-12-05 19:40:25.084871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:25.084904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:25.097759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef57b0 00:20:31.789 [2024-12-05 19:40:25.099873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:25.099930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:25.112814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef4f40 00:20:31.789 [2024-12-05 19:40:25.115070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:25.115097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:25.128000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef46d0 00:20:31.789 [2024-12-05 19:40:25.130048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:25.130079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:25.142926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef3e60 00:20:31.789 [2024-12-05 19:40:25.145361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:25.145393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:25.158279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef35f0 00:20:31.789 [2024-12-05 19:40:25.160397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:25.160443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:25.173425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef2d80 00:20:31.789 [2024-12-05 19:40:25.175465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:25.175497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:25.188529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef2510 00:20:31.789 [2024-12-05 19:40:25.190662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:25.190690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:25.204021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef1ca0 00:20:31.789 [2024-12-05 19:40:25.206138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:25.206180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:31.789 [2024-12-05 19:40:25.219510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef1430 00:20:31.789 [2024-12-05 19:40:25.221527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.789 [2024-12-05 19:40:25.221560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:32.047 [2024-12-05 19:40:25.234982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef0bc0 00:20:32.048 [2024-12-05 19:40:25.237341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.048 [2024-12-05 19:40:25.237374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:32.048 [2024-12-05 19:40:25.250936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef0350 00:20:32.048 [2024-12-05 19:40:25.253030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.048 [2024-12-05 19:40:25.253296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:32.048 [2024-12-05 19:40:25.267972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eefae0 00:20:32.048 [2024-12-05 19:40:25.269941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.048 [2024-12-05 19:40:25.269973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:32.048 [2024-12-05 19:40:25.284040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eef270 00:20:32.048 [2024-12-05 19:40:25.286152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.048 [2024-12-05 19:40:25.286190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:32.048 [2024-12-05 19:40:25.299332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eeea00 00:20:32.048 [2024-12-05 19:40:25.301332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.048 [2024-12-05 19:40:25.301365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:32.048 [2024-12-05 19:40:25.315150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eee190 00:20:32.048 [2024-12-05 19:40:25.316999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.048 [2024-12-05 19:40:25.317187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.048 [2024-12-05 19:40:25.331203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eed920 00:20:32.048 [2024-12-05 19:40:25.333122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.048 [2024-12-05 19:40:25.333352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:32.048 [2024-12-05 19:40:25.347062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eed0b0 00:20:32.048 [2024-12-05 19:40:25.348986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.048 [2024-12-05 19:40:25.349166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:32.048 [2024-12-05 19:40:25.363827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eec840 00:20:32.048 [2024-12-05 19:40:25.365739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.048 [2024-12-05 19:40:25.365782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:32.048 [2024-12-05 19:40:25.380519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eebfd0 00:20:32.048 [2024-12-05 19:40:25.382649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.048 [2024-12-05 19:40:25.382682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:32.048 [2024-12-05 19:40:25.396971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eeb760 00:20:32.048 [2024-12-05 19:40:25.398883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.048 [2024-12-05 19:40:25.398915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:32.048 [2024-12-05 19:40:25.412766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eeaef0 00:20:32.048 [2024-12-05 19:40:25.414642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.048 [2024-12-05 19:40:25.414688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:32.048 [2024-12-05 19:40:25.428654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eea680 00:20:32.048 [2024-12-05 19:40:25.430436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.048 [2024-12-05 19:40:25.430466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:32.048 [2024-12-05 19:40:25.444234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee9e10 00:20:32.048 [2024-12-05 19:40:25.445931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.048 [2024-12-05 19:40:25.445960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:32.048 [2024-12-05 19:40:25.460216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee95a0 00:20:32.048 [2024-12-05 19:40:25.461950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.048 [2024-12-05 19:40:25.461998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:32.048 [2024-12-05 19:40:25.476203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee8d30 00:20:32.048 [2024-12-05 19:40:25.477868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.048 [2024-12-05 19:40:25.477915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:32.307 [2024-12-05 19:40:25.492457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee84c0 00:20:32.307 [2024-12-05 19:40:25.494113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.307 [2024-12-05 19:40:25.494198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:32.307 [2024-12-05 19:40:25.508478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee7c50 00:20:32.307 [2024-12-05 19:40:25.510214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.307 [2024-12-05 19:40:25.510278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:32.307 [2024-12-05 19:40:25.524437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee73e0 00:20:32.307 [2024-12-05 19:40:25.526085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.307 [2024-12-05 19:40:25.526141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:32.307 [2024-12-05 19:40:25.540164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee6b70 00:20:32.307 [2024-12-05 19:40:25.541850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.307 [2024-12-05 
19:40:25.541894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:32.307 16067.00 IOPS, 62.76 MiB/s [2024-12-05T19:40:25.744Z] [2024-12-05 19:40:25.557192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee6300 00:20:32.307 [2024-12-05 19:40:25.558956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.307 [2024-12-05 19:40:25.559002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:32.307 [2024-12-05 19:40:25.573077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee5a90 00:20:32.307 [2024-12-05 19:40:25.574797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.307 [2024-12-05 19:40:25.574844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.307 [2024-12-05 19:40:25.589061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee5220 00:20:32.307 [2024-12-05 19:40:25.590745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.307 [2024-12-05 19:40:25.590807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:32.307 [2024-12-05 19:40:25.604788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee49b0 00:20:32.307 [2024-12-05 19:40:25.606419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.307 [2024-12-05 19:40:25.606449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:32.307 [2024-12-05 19:40:25.620249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee4140 00:20:32.307 [2024-12-05 19:40:25.621791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.307 [2024-12-05 19:40:25.621835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:32.307 [2024-12-05 19:40:25.635496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee38d0 00:20:32.307 [2024-12-05 19:40:25.637154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.307 [2024-12-05 19:40:25.637223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:32.307 [2024-12-05 19:40:25.650989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee3060 00:20:32.307 [2024-12-05 19:40:25.652632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 
nsid:1 lba:20795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.307 [2024-12-05 19:40:25.652677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:32.307 [2024-12-05 19:40:25.666581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee27f0 00:20:32.307 [2024-12-05 19:40:25.668188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.307 [2024-12-05 19:40:25.668220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:32.307 [2024-12-05 19:40:25.682390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee1f80 00:20:32.307 [2024-12-05 19:40:25.683838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.307 [2024-12-05 19:40:25.683890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:32.307 [2024-12-05 19:40:25.697941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee1710 00:20:32.307 [2024-12-05 19:40:25.699447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.307 [2024-12-05 19:40:25.699478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:32.307 [2024-12-05 19:40:25.713470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee0ea0 00:20:32.307 [2024-12-05 19:40:25.714891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.307 [2024-12-05 19:40:25.714934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:32.307 [2024-12-05 19:40:25.729063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee0630 00:20:32.307 [2024-12-05 19:40:25.730469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.307 [2024-12-05 19:40:25.730514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:32.566 [2024-12-05 19:40:25.744487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016edfdc0 00:20:32.567 [2024-12-05 19:40:25.745916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:25.745960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:32.567 [2024-12-05 19:40:25.759933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016edf550 00:20:32.567 [2024-12-05 19:40:25.761371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:17254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:25.761399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:32.567 [2024-12-05 19:40:25.775646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016edece0 00:20:32.567 [2024-12-05 19:40:25.777072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:25.777117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:32.567 [2024-12-05 19:40:25.791373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ede470 00:20:32.567 [2024-12-05 19:40:25.792799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:25.792843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:32.567 [2024-12-05 19:40:25.813271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eddc00 00:20:32.567 [2024-12-05 19:40:25.815786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:25.815829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:32.567 [2024-12-05 19:40:25.829059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ede470 00:20:32.567 [2024-12-05 19:40:25.831590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:25.831635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:32.567 [2024-12-05 19:40:25.844647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016edece0 00:20:32.567 [2024-12-05 19:40:25.847342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:25.847387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:32.567 [2024-12-05 19:40:25.860512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016edf550 00:20:32.567 [2024-12-05 19:40:25.863011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:25.863054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:32.567 [2024-12-05 19:40:25.876014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016edfdc0 00:20:32.567 [2024-12-05 19:40:25.878616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:2446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:25.878661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:32.567 [2024-12-05 19:40:25.891644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee0630 00:20:32.567 [2024-12-05 19:40:25.894182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:25.894217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:32.567 [2024-12-05 19:40:25.906882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee0ea0 00:20:32.567 [2024-12-05 19:40:25.909394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:25.909439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:32.567 [2024-12-05 19:40:25.922470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee1710 00:20:32.567 [2024-12-05 19:40:25.925001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:25.925045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:32.567 [2024-12-05 19:40:25.938231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee1f80 00:20:32.567 [2024-12-05 19:40:25.940699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:25.940743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:32.567 [2024-12-05 19:40:25.953753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee27f0 00:20:32.567 [2024-12-05 19:40:25.956116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:25.956150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:32.567 [2024-12-05 19:40:25.969470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee3060 00:20:32.567 [2024-12-05 19:40:25.971748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:25.971787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:32.567 [2024-12-05 19:40:25.984791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee38d0 00:20:32.567 [2024-12-05 19:40:25.987085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:25.987120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:32.567 [2024-12-05 19:40:26.000135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee4140 00:20:32.567 [2024-12-05 19:40:26.002403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.567 [2024-12-05 19:40:26.002450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:32.826 [2024-12-05 19:40:26.015779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee49b0 00:20:32.826 [2024-12-05 19:40:26.018058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.826 [2024-12-05 19:40:26.018103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:32.827 [2024-12-05 19:40:26.031310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee5220 00:20:32.827 [2024-12-05 19:40:26.033509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.827 [2024-12-05 19:40:26.033578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:32.827 [2024-12-05 19:40:26.046602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee5a90 00:20:32.827 [2024-12-05 19:40:26.048934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.827 [2024-12-05 19:40:26.048978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:32.827 [2024-12-05 19:40:26.062423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee6300 00:20:32.827 [2024-12-05 19:40:26.064759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.827 [2024-12-05 19:40:26.064820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:32.827 [2024-12-05 19:40:26.078377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee6b70 00:20:32.827 [2024-12-05 19:40:26.080773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.827 [2024-12-05 19:40:26.080823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:32.827 [2024-12-05 19:40:26.094856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee73e0 00:20:32.827 [2024-12-05 19:40:26.097112] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.827 [2024-12-05 19:40:26.097165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:32.827 [2024-12-05 19:40:26.111604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee7c50 00:20:32.827 [2024-12-05 19:40:26.113833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.827 [2024-12-05 19:40:26.113877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:32.827 [2024-12-05 19:40:26.127517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee84c0 00:20:32.827 [2024-12-05 19:40:26.129737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.827 [2024-12-05 19:40:26.129782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:32.827 [2024-12-05 19:40:26.143515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee8d30 00:20:32.827 [2024-12-05 19:40:26.145704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.827 [2024-12-05 19:40:26.145734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:32.827 [2024-12-05 19:40:26.159544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee95a0 00:20:32.827 [2024-12-05 19:40:26.161753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.827 [2024-12-05 19:40:26.161799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:32.827 [2024-12-05 19:40:26.175636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ee9e10 00:20:32.827 [2024-12-05 19:40:26.177908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.827 [2024-12-05 19:40:26.177939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:32.827 [2024-12-05 19:40:26.191702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eea680 00:20:32.827 [2024-12-05 19:40:26.193911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.827 [2024-12-05 19:40:26.193957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:32.827 [2024-12-05 19:40:26.208075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eeaef0 00:20:32.827 [2024-12-05 
19:40:26.210328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.827 [2024-12-05 19:40:26.210358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:32.827 [2024-12-05 19:40:26.224222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eeb760 00:20:32.827 [2024-12-05 19:40:26.226331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.827 [2024-12-05 19:40:26.226362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:32.827 [2024-12-05 19:40:26.240643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eebfd0 00:20:32.827 [2024-12-05 19:40:26.242691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.827 [2024-12-05 19:40:26.242734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:32.827 [2024-12-05 19:40:26.257183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eec840 00:20:32.827 [2024-12-05 19:40:26.259285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.827 [2024-12-05 19:40:26.259329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:33.087 [2024-12-05 19:40:26.273961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eed0b0 00:20:33.087 [2024-12-05 19:40:26.276184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.087 [2024-12-05 19:40:26.276218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:33.087 [2024-12-05 19:40:26.291008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eed920 00:20:33.087 [2024-12-05 19:40:26.293040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.087 [2024-12-05 19:40:26.293084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:33.087 [2024-12-05 19:40:26.307729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eee190 00:20:33.087 [2024-12-05 19:40:26.309764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.087 [2024-12-05 19:40:26.309810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:33.087 [2024-12-05 19:40:26.323860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eeea00 00:20:33.087 
[2024-12-05 19:40:26.325886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.087 [2024-12-05 19:40:26.325941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:33.087 [2024-12-05 19:40:26.340058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eef270 00:20:33.087 [2024-12-05 19:40:26.341961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.087 [2024-12-05 19:40:26.342020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:33.087 [2024-12-05 19:40:26.355743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016eefae0 00:20:33.088 [2024-12-05 19:40:26.357832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.088 [2024-12-05 19:40:26.357877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:33.088 [2024-12-05 19:40:26.372872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef0350 00:20:33.088 [2024-12-05 19:40:26.374861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.088 [2024-12-05 19:40:26.374905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:33.088 [2024-12-05 19:40:26.390075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef0bc0 00:20:33.088 [2024-12-05 19:40:26.391980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.088 [2024-12-05 19:40:26.392009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:33.088 [2024-12-05 19:40:26.406929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef1430 00:20:33.088 [2024-12-05 19:40:26.408870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.088 [2024-12-05 19:40:26.408915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:33.088 [2024-12-05 19:40:26.422899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef1ca0 00:20:33.088 [2024-12-05 19:40:26.424735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.088 [2024-12-05 19:40:26.424781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:33.088 [2024-12-05 19:40:26.438641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef2510 
00:20:33.088 [2024-12-05 19:40:26.440519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.088 [2024-12-05 19:40:26.440580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:33.088 [2024-12-05 19:40:26.454303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef2d80 00:20:33.088 [2024-12-05 19:40:26.456182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.088 [2024-12-05 19:40:26.456203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:33.088 [2024-12-05 19:40:26.470438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef35f0 00:20:33.088 [2024-12-05 19:40:26.472388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.088 [2024-12-05 19:40:26.472422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:33.088 [2024-12-05 19:40:26.486552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef3e60 00:20:33.088 [2024-12-05 19:40:26.488423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.088 [2024-12-05 19:40:26.488451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:33.088 [2024-12-05 19:40:26.502519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef46d0 00:20:33.088 [2024-12-05 19:40:26.504351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.088 [2024-12-05 19:40:26.504380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:33.088 [2024-12-05 19:40:26.518334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef4f40 00:20:33.088 [2024-12-05 19:40:26.520144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.088 [2024-12-05 19:40:26.520170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:33.347 [2024-12-05 19:40:26.535005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef57b0 00:20:33.347 [2024-12-05 19:40:26.536815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.347 [2024-12-05 19:40:26.536860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:33.347 16003.50 IOPS, 62.51 MiB/s [2024-12-05T19:40:26.784Z] [2024-12-05 19:40:26.551732] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xbd4c20) with pdu=0x200016ef6020 00:20:33.347 [2024-12-05 19:40:26.553452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.347 [2024-12-05 19:40:26.553481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:33.347 00:20:33.347 Latency(us) 00:20:33.347 [2024-12-05T19:40:26.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.347 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:33.347 nvme0n1 : 2.01 16022.97 62.59 0.00 0.00 7981.37 6404.65 29789.09 00:20:33.347 [2024-12-05T19:40:26.784Z] =================================================================================================================== 00:20:33.347 [2024-12-05T19:40:26.784Z] Total : 16022.97 62.59 0.00 0.00 7981.37 6404.65 29789.09 00:20:33.347 { 00:20:33.347 "results": [ 00:20:33.347 { 00:20:33.347 "job": "nvme0n1", 00:20:33.347 "core_mask": "0x2", 00:20:33.347 "workload": "randwrite", 00:20:33.347 "status": "finished", 00:20:33.347 "queue_depth": 128, 00:20:33.347 "io_size": 4096, 00:20:33.347 "runtime": 2.005558, 00:20:33.347 "iops": 16022.972160366342, 00:20:33.347 "mibps": 62.58973500143102, 00:20:33.347 "io_failed": 0, 00:20:33.347 "io_timeout": 0, 00:20:33.347 "avg_latency_us": 7981.367905964892, 00:20:33.347 "min_latency_us": 6404.654545454546, 00:20:33.347 "max_latency_us": 29789.090909090908 00:20:33.347 } 00:20:33.347 ], 00:20:33.347 "core_count": 1 00:20:33.347 } 00:20:33.347 19:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:33.347 19:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:33.347 19:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:33.347 | .driver_specific 00:20:33.347 | .nvme_error 00:20:33.347 | .status_code 00:20:33.347 | .command_transient_transport_error' 00:20:33.347 19:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:33.606 19:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 126 > 0 )) 00:20:33.606 19:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80352 00:20:33.606 19:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80352 ']' 00:20:33.606 19:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80352 00:20:33.606 19:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:33.606 19:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.606 19:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80352 00:20:33.606 19:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:33.606 19:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:33.606 killing process with pid 80352 00:20:33.606 
19:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80352' 00:20:33.606 Received shutdown signal, test time was about 2.000000 seconds 00:20:33.606 00:20:33.606 Latency(us) 00:20:33.606 [2024-12-05T19:40:27.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.606 [2024-12-05T19:40:27.043Z] =================================================================================================================== 00:20:33.606 [2024-12-05T19:40:27.043Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:33.606 19:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80352 00:20:33.606 19:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80352 00:20:33.865 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:33.865 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:33.865 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:33.865 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:33.865 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:33.865 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80407 00:20:33.865 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:33.865 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80407 /var/tmp/bperf.sock 00:20:33.865 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80407 ']' 00:20:33.865 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:33.865 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:33.865 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:33.865 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.865 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:33.865 [2024-12-05 19:40:27.133342] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:20:33.865 [2024-12-05 19:40:27.133481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80407 ] 00:20:33.865 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:33.865 Zero copy mechanism will not be used. 
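The pass/fail decision for the run above comes from the per-bdev NVMe error counters: digest.sh queries bdev_get_iostat over the bperf RPC socket, extracts the transient transport error count with the jq filter shown in the trace, and requires it to be non-zero (126 here) before tearing the bdevperf process down. A minimal stand-alone sketch of that check, assuming the same rpc.py path and /var/tmp/bperf.sock socket as in the trace and that jq is available:

# Read the per-bdev NVMe error counters and fail unless at least one
# COMMAND TRANSIENT TRANSPORT ERROR was counted for nvme0n1.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 )) || exit 1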
00:20:33.865 [2024-12-05 19:40:27.285427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.124 [2024-12-05 19:40:27.349035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.124 [2024-12-05 19:40:27.406407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:34.124 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.124 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:34.124 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:34.124 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:34.383 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:34.383 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.383 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:34.383 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.383 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:34.383 19:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:34.961 nvme0n1 00:20:34.961 19:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:34.961 19:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.961 19:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:34.961 19:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.961 19:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:34.961 19:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:34.961 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:34.961 Zero copy mechanism will not be used. 00:20:34.961 Running I/O for 2 seconds... 
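Before the 2-second randwrite run whose output follows, the trace above configures the digest-error scenario entirely through RPCs against the freshly started bdevperf instance. A condensed sketch of that sequence, assuming (as the trace suggests) that accel_error_inject_error is sent to the target's default RPC socket while the bdev_nvme calls go to bperf.sock:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

# Count NVMe errors per bdev and retry without limit so injected digest errors
# surface in the statistics instead of failing the bdev outright.
$RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Keep crc32c injection disabled while the controller attaches with data digest (--ddgst) enabled.
$RPC accel_error_inject_error -o crc32c -t disable
$RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Turn on crc32c corruption (-o crc32c -t corrupt -i 32, as in the trace) so writes hit data digest errors.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32
# Run the workload defined on the bdevperf command line (randwrite, 128 KiB I/O, queue depth 16, 2 s).
$BPERF -s /var/tmp/bperf.sock perform_tests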
00:20:34.961 [2024-12-05 19:40:28.215799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.961 [2024-12-05 19:40:28.215983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.961 [2024-12-05 19:40:28.216015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:34.961 [2024-12-05 19:40:28.221531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.961 [2024-12-05 19:40:28.221648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.961 [2024-12-05 19:40:28.221671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:34.961 [2024-12-05 19:40:28.226640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.961 [2024-12-05 19:40:28.226727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.961 [2024-12-05 19:40:28.226748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:34.961 [2024-12-05 19:40:28.231803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.961 [2024-12-05 19:40:28.231942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.961 [2024-12-05 19:40:28.231965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:34.961 [2024-12-05 19:40:28.237024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.961 [2024-12-05 19:40:28.237127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.961 [2024-12-05 19:40:28.237149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:34.961 [2024-12-05 19:40:28.242112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.961 [2024-12-05 19:40:28.242226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.961 [2024-12-05 19:40:28.242248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:34.961 [2024-12-05 19:40:28.247144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.961 [2024-12-05 19:40:28.247250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.961 [2024-12-05 19:40:28.247272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:20:34.961 [2024-12-05 19:40:28.252160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.961 [2024-12-05 19:40:28.252264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.961 [2024-12-05 19:40:28.252284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:34.961 [2024-12-05 19:40:28.257310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.961 [2024-12-05 19:40:28.257402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.961 [2024-12-05 19:40:28.257424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:34.961 [2024-12-05 19:40:28.262289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.961 [2024-12-05 19:40:28.262375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.262395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.267326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.267418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.267440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.272647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.272735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.272756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.277890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.277994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.278015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.283211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.283315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.283336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.288378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.288477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.288497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.293488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.293560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.293581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.298829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.298930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.298951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.304024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.304096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.304117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.309069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.309166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.309198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.314222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.314294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.314314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.319659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.319743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.319764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.325027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.325099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.325122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.330402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.330474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.330495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.335784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.335915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.335938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.341235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.341380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.341401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.346835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.346917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.346937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.352320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.352417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.352438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.357655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.357777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.357796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.362930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.363022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.363043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.368305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.368400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.368421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.373330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.373524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.373548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.378788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.378929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.378951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:34.962 [2024-12-05 19:40:28.384193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:34.962 [2024-12-05 19:40:28.384266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.962 [2024-12-05 19:40:28.384287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.389339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.389411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.389432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.394402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.394490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.394510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.399836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.399922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.399944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.405182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.405271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.405303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.410290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.410356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.410392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.415450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.415535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.415568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.420693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.420781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.420802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.425990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.426076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.426098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.431458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.431565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 
19:40:28.431587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.436967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.437089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.437110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.442605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.442708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.442744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.448000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.448070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.448092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.453246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.453343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.453364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.458762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.458886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.458908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.464222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.464300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.464327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.469740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.469835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:35.229 [2024-12-05 19:40:28.469856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.475000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.475085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.475106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.480447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.480555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.480575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.485914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.485999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.486021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.491091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.491211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.491233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.496307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.496409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.496430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.501597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.501684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.501704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.506839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.506923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.506946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.512060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.512299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.512321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.517570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.517672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.229 [2024-12-05 19:40:28.517693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.229 [2024-12-05 19:40:28.522863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.229 [2024-12-05 19:40:28.522936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.522958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.528121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.528222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.528261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.533583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.533693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.533715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.538734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.538806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.538828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.544113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.544227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.544247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.549605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.549716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.549737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.554807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.554891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.554912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.559939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.560024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.560046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.564976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.565061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.565082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.570334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.570420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.570443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.575435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.575520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.575541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.580602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.580701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.580722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.585906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.585990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.586014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.591099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.591224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.591245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.596252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.596323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.596345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.601583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.601685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.601706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.606765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.606888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.606909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.611956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.612050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.612072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.617249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.617333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.617354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.622533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.622681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.622702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.627748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.627922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.627944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.632979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.633079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.633099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.638099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.638244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.638264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.643331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.643404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.643426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.648560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.648671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.648691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.653684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.653783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.653804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.230 [2024-12-05 19:40:28.659076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.230 [2024-12-05 19:40:28.659216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.230 [2024-12-05 19:40:28.659239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.489 [2024-12-05 19:40:28.664389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.489 [2024-12-05 19:40:28.664477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.489 [2024-12-05 19:40:28.664497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.489 [2024-12-05 19:40:28.669634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.489 [2024-12-05 19:40:28.669734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.489 [2024-12-05 19:40:28.669755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.489 [2024-12-05 19:40:28.674746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.489 [2024-12-05 19:40:28.674880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.489 [2024-12-05 19:40:28.674902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.489 [2024-12-05 19:40:28.680063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.489 [2024-12-05 19:40:28.680190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.489 [2024-12-05 19:40:28.680211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.489 [2024-12-05 19:40:28.685203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.489 [2024-12-05 19:40:28.685331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.489 [2024-12-05 19:40:28.685352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.489 [2024-12-05 19:40:28.690361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.489 [2024-12-05 
19:40:28.690473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.489 [2024-12-05 19:40:28.690494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.489 [2024-12-05 19:40:28.695537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.489 [2024-12-05 19:40:28.695671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.489 [2024-12-05 19:40:28.695693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.489 [2024-12-05 19:40:28.700719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.489 [2024-12-05 19:40:28.700829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.489 [2024-12-05 19:40:28.700850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.489 [2024-12-05 19:40:28.705781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.489 [2024-12-05 19:40:28.705895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.489 [2024-12-05 19:40:28.705916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.489 [2024-12-05 19:40:28.710909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.489 [2024-12-05 19:40:28.711039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.489 [2024-12-05 19:40:28.711061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.489 [2024-12-05 19:40:28.716141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.489 [2024-12-05 19:40:28.716304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.489 [2024-12-05 19:40:28.716325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.489 [2024-12-05 19:40:28.721449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.489 [2024-12-05 19:40:28.721551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.489 [2024-12-05 19:40:28.721572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.489 [2024-12-05 19:40:28.726703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 
00:20:35.489 [2024-12-05 19:40:28.726806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.489 [2024-12-05 19:40:28.726828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.489 [2024-12-05 19:40:28.731775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.489 [2024-12-05 19:40:28.731875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.489 [2024-12-05 19:40:28.731925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.736824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.736925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.736946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.741911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.742029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.742051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.747184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.747289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.747310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.752464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.752565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.752587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.757534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.757650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.757671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.762860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with 
pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.762960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.762981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.768034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.768130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.768163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.773297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.773372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.773395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.778588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.778689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.778710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.783631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.783745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.783765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.788895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.788982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.789003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.794131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.794255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.794278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.799386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.799487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.799508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.804569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.804703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.804726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.809811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.809923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.809943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.814951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.815063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.815083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.820046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.820197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.820219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.825447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.825565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.825586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.830824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.830954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.830976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.836022] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.490 [2024-12-05 19:40:28.836113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.490 [2024-12-05 19:40:28.836136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.490 [2024-12-05 19:40:28.841171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.491 [2024-12-05 19:40:28.841336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.491 [2024-12-05 19:40:28.841357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.491 [2024-12-05 19:40:28.846569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.491 [2024-12-05 19:40:28.846654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.491 [2024-12-05 19:40:28.846674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.491 [2024-12-05 19:40:28.851871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.491 [2024-12-05 19:40:28.851985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.491 [2024-12-05 19:40:28.852022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.491 [2024-12-05 19:40:28.857318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.491 [2024-12-05 19:40:28.857428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.491 [2024-12-05 19:40:28.857449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.491 [2024-12-05 19:40:28.862615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.491 [2024-12-05 19:40:28.862746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.491 [2024-12-05 19:40:28.862769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.491 [2024-12-05 19:40:28.867643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.491 [2024-12-05 19:40:28.867738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.491 [2024-12-05 19:40:28.867759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.491 [2024-12-05 19:40:28.872826] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.491 [2024-12-05 19:40:28.872925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.491 [2024-12-05 19:40:28.872946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.491 [2024-12-05 19:40:28.878051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.491 [2024-12-05 19:40:28.878146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.491 [2024-12-05 19:40:28.878167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.491 [2024-12-05 19:40:28.883285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.491 [2024-12-05 19:40:28.883381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.491 [2024-12-05 19:40:28.883402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.491 [2024-12-05 19:40:28.888481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.491 [2024-12-05 19:40:28.888566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.491 [2024-12-05 19:40:28.888587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.491 [2024-12-05 19:40:28.893741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.491 [2024-12-05 19:40:28.893832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.491 [2024-12-05 19:40:28.893853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.491 [2024-12-05 19:40:28.898911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.491 [2024-12-05 19:40:28.898987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.491 [2024-12-05 19:40:28.899009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.491 [2024-12-05 19:40:28.904342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.491 [2024-12-05 19:40:28.904435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.491 [2024-12-05 19:40:28.904456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.491 
[2024-12-05 19:40:28.909549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.491 [2024-12-05 19:40:28.909641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.491 [2024-12-05 19:40:28.909662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.491 [2024-12-05 19:40:28.914918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.491 [2024-12-05 19:40:28.915080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.491 [2024-12-05 19:40:28.915101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.491 [2024-12-05 19:40:28.920003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.491 [2024-12-05 19:40:28.920104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.491 [2024-12-05 19:40:28.920126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:28.925070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:28.925177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:28.925197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:28.930326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:28.930461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:28.930482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:28.935461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:28.935564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:28.935585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:28.940556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:28.940656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:28.940677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:20:35.750 [2024-12-05 19:40:28.945882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:28.945983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:28.946004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:28.951216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:28.951325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:28.951345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:28.956410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:28.956528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:28.956550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:28.961376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:28.961475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:28.961495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:28.966283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:28.966398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:28.966419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:28.971275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:28.971405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:28.971427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:28.976488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:28.976630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:28.976650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:28.981641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:28.981741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:28.981764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:28.986966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:28.987040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:28.987061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:28.992249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:28.992349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:28.992371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:28.997668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:28.997806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:28.997829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.003300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.003433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.003456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.008499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.008619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.008642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.013948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.014050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.014072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.019404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.019515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.019537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.024588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.024661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.024697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.029842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.029943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.029964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.035143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.035260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.035281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.040429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.040621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.040643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.045782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.045885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.045907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.051211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.051337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.051358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.056434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.056550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.056572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.061740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.061863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.061885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.067105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.067199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.067222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.072318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.072403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.072425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.077487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.077577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.077599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.082595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.082696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.082719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.087660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.087781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.087807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.092827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.092943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.092964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.098041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.098144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.098166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.750 [2024-12-05 19:40:29.103452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.750 [2024-12-05 19:40:29.103566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.750 [2024-12-05 19:40:29.103589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.751 [2024-12-05 19:40:29.108797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.751 [2024-12-05 19:40:29.108912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.751 [2024-12-05 19:40:29.108933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.751 [2024-12-05 19:40:29.114304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.751 [2024-12-05 19:40:29.114401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.751 [2024-12-05 19:40:29.114423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.751 [2024-12-05 19:40:29.119654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.751 [2024-12-05 19:40:29.119756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.751 [2024-12-05 19:40:29.119777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.751 [2024-12-05 19:40:29.124838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.751 [2024-12-05 19:40:29.124950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.751 [2024-12-05 
19:40:29.124971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.751 [2024-12-05 19:40:29.130272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.751 [2024-12-05 19:40:29.130388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.751 [2024-12-05 19:40:29.130409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.751 [2024-12-05 19:40:29.135605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.751 [2024-12-05 19:40:29.135734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.751 [2024-12-05 19:40:29.135756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.751 [2024-12-05 19:40:29.140770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.751 [2024-12-05 19:40:29.140922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.751 [2024-12-05 19:40:29.140944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.751 [2024-12-05 19:40:29.146115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.751 [2024-12-05 19:40:29.146264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.751 [2024-12-05 19:40:29.146285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.751 [2024-12-05 19:40:29.151478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.751 [2024-12-05 19:40:29.151584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.751 [2024-12-05 19:40:29.151606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.751 [2024-12-05 19:40:29.156865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.751 [2024-12-05 19:40:29.156977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.751 [2024-12-05 19:40:29.156998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.751 [2024-12-05 19:40:29.162274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.751 [2024-12-05 19:40:29.162389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:35.751 [2024-12-05 19:40:29.162412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.751 [2024-12-05 19:40:29.167521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.751 [2024-12-05 19:40:29.167647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.751 [2024-12-05 19:40:29.167668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.751 [2024-12-05 19:40:29.172938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.751 [2024-12-05 19:40:29.173053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.751 [2024-12-05 19:40:29.173076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.751 [2024-12-05 19:40:29.178190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.751 [2024-12-05 19:40:29.178300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.751 [2024-12-05 19:40:29.178322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.751 [2024-12-05 19:40:29.183392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:35.751 [2024-12-05 19:40:29.183487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.751 [2024-12-05 19:40:29.183523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.188716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.188791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.188814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.193922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.194024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.194045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.199280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.199381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.199403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.204700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.204801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.204824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.010 5844.00 IOPS, 730.50 MiB/s [2024-12-05T19:40:29.447Z] [2024-12-05 19:40:29.210978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.211076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.211098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.216210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.216336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.216357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.221287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.221372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.221392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.226597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.226722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.226743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.231930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.232021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.232043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.237119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.237262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.237301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.242463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.242573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.242594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.247699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.247794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.247815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.252940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.253039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.253061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.258085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.258198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.258219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.263281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.263393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.263413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.268644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.268752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.268773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.273850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 
19:40:29.273949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.273970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.278995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.279111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.279133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.284297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.010 [2024-12-05 19:40:29.284398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.010 [2024-12-05 19:40:29.284419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.010 [2024-12-05 19:40:29.289494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.289592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.289613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.294775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.294850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.294873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.300223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.300293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.300313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.305530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.305618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.305641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.310829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 
00:20:36.011 [2024-12-05 19:40:29.310903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.310926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.316261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.316361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.316383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.321542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.321629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.321650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.326625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.326725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.326746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.331952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.332053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.332075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.337294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.337405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.337425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.342434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.342535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.342557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.347783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with 
pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.347878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.347926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.353222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.353310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.353332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.358751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.358842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.358865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.364328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.364402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.364425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.369751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.369852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.369873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.375273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.375359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.375381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.380680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.380818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.380838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.386046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.386155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.386177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.391232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.391349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.391371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.396526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.396626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.396646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.401889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.402001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.402022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.407318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.407410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.407432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.412507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.412610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.412630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.417676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.417760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.417781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.422952] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.423067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.423088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.428287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.011 [2024-12-05 19:40:29.428370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.011 [2024-12-05 19:40:29.428390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.011 [2024-12-05 19:40:29.433404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.012 [2024-12-05 19:40:29.433473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.012 [2024-12-05 19:40:29.433494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.012 [2024-12-05 19:40:29.438772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.012 [2024-12-05 19:40:29.438847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.012 [2024-12-05 19:40:29.438869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.012 [2024-12-05 19:40:29.444238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.012 [2024-12-05 19:40:29.444332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.012 [2024-12-05 19:40:29.444354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.271 [2024-12-05 19:40:29.449702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.449806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.449827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.455061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.455177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.455199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.460406] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.460533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.460554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.465818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.465892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.465931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.471202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.471332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.471355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.476622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.476737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.476761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.482020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.482118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.482173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.487321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.487466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.487488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.492596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.492691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.492712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.272 
[2024-12-05 19:40:29.497719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.497813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.497834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.503097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.503261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.503281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.508614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.508715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.508736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.513910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.513989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.514010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.519325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.519432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.519454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.524593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.524723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.524745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.529919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.530030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.530051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.535212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.535311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.535331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.540527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.540614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.540635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.545775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.545859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.545880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.550932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.551034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.551055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.556292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.556425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.556446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.561471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.561590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.561611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.566570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.566665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.566686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.571940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.572019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.572042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.577417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.577517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.577539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.582757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.582857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.272 [2024-12-05 19:40:29.582877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.272 [2024-12-05 19:40:29.588145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.272 [2024-12-05 19:40:29.588275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.588296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.593452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.593544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.593566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.598680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.598789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.598809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.603679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.603823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.603843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.609079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.609191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.609226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.614425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.614509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.614531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.619728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.619897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.619925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.625230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.625330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.625352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.630608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.630693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.630715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.635930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.636004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.636027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.641399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.641519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.641540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.646699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.646831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.646854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.651998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.652079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.652101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.657512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.657614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.657635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.662977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.663093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.663115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.668271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.668349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.668372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.673469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.673543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.673565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.678544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.678655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.678677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.683645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.683762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.683783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.688726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.688820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.688843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.693666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.693762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.693783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.698877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.698952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.698974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.273 [2024-12-05 19:40:29.704043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.273 [2024-12-05 19:40:29.704117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.273 [2024-12-05 19:40:29.704155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.533 [2024-12-05 19:40:29.709518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.533 [2024-12-05 19:40:29.709604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.533 [2024-12-05 19:40:29.709626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.533 [2024-12-05 19:40:29.714567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.533 [2024-12-05 19:40:29.714650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.533 [2024-12-05 
19:40:29.714672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.533 [2024-12-05 19:40:29.719687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.533 [2024-12-05 19:40:29.719759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.533 [2024-12-05 19:40:29.719781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.533 [2024-12-05 19:40:29.724905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.533 [2024-12-05 19:40:29.725013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.533 [2024-12-05 19:40:29.725034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.533 [2024-12-05 19:40:29.730267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.533 [2024-12-05 19:40:29.730372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.533 [2024-12-05 19:40:29.730395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.533 [2024-12-05 19:40:29.735390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.533 [2024-12-05 19:40:29.735486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.533 [2024-12-05 19:40:29.735508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.533 [2024-12-05 19:40:29.740438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.533 [2024-12-05 19:40:29.740556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.533 [2024-12-05 19:40:29.740577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.533 [2024-12-05 19:40:29.745740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.533 [2024-12-05 19:40:29.745849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.533 [2024-12-05 19:40:29.745871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.533 [2024-12-05 19:40:29.750916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.533 [2024-12-05 19:40:29.750989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
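For reference when reading these completions, the "(00/22) ... sqhd:... p:0 m:0 dnr:0" portion is the decoded 16-bit completion status: status code type (SCT) 0x0, status code (SC) 0x22, plus the phase, more, and do-not-retry bits. A small decoder sketch for that field, using the bit layout from the NVMe base specification (the names below are illustrative, not SPDK's types):

#include <stdint.h>
#include <stdio.h>

/* Layout of the completion status word as printed in the log:
 * bit 0 = phase tag (p), bits 8:1 = status code (sc),
 * bits 11:9 = status code type (sct), bit 14 = more (m),
 * bit 15 = do not retry (dnr). */
struct cpl_status {
    uint8_t p, sc, sct, m, dnr;
};

static struct cpl_status decode_status(uint16_t status)
{
    struct cpl_status s = {
        .p   = status & 0x1,
        .sc  = (status >> 1) & 0xff,
        .sct = (status >> 9) & 0x7,
        .m   = (status >> 14) & 0x1,
        .dnr = (status >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* Status word matching the "(00/22)" completions above:
     * SCT 0x0 (generic), SC 0x22 (transient transport error), p/m/dnr all 0. */
    uint16_t status = 0x22 << 1;
    struct cpl_status s = decode_status(status);

    printf("sct=0x%x sc=0x%x p=%d m=%d dnr=%d\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}

dnr=0 on every completion above is what lets the initiator treat these injected digest failures as retryable rather than terminal.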
00:20:36.533 [2024-12-05 19:40:29.751011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.533 [2024-12-05 19:40:29.756146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.533 [2024-12-05 19:40:29.756229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.756251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.761259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.761348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.761370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.766465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.766543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.766565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.771760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.771876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.771909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.777053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.777162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.777197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.782206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.782293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.782315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.787474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.787574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.787595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.792698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.792798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.792819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.797997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.798111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.798133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.803290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.803384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.803406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.808515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.808600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.808621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.813677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.813798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.813821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.818827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.818916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.818937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.823835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.823961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.823983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.829116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.829339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.829362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.834269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.834384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.834405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.839298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.839386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.839407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.844420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.844537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.844559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.849408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.849523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.849544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.854598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.854672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.854694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.859920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.859994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.860016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.865280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.865364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.865386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.870287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.870390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.870411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.875311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.875414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.875435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.880742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.880817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.880839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.885971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.886098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.886136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.891293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.891395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-12-05 19:40:29.891416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.534 [2024-12-05 19:40:29.896493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.534 [2024-12-05 19:40:29.896617] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-12-05 19:40:29.896637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.535 [2024-12-05 19:40:29.901607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.535 [2024-12-05 19:40:29.901712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-12-05 19:40:29.901734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.535 [2024-12-05 19:40:29.906658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.535 [2024-12-05 19:40:29.906776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-12-05 19:40:29.906795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.535 [2024-12-05 19:40:29.911803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.535 [2024-12-05 19:40:29.911931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-12-05 19:40:29.911953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.535 [2024-12-05 19:40:29.917075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.535 [2024-12-05 19:40:29.917183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-12-05 19:40:29.917205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.535 [2024-12-05 19:40:29.922276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.535 [2024-12-05 19:40:29.922377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-12-05 19:40:29.922398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.535 [2024-12-05 19:40:29.927508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.535 [2024-12-05 19:40:29.927611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-12-05 19:40:29.927632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.535 [2024-12-05 19:40:29.932742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.535 [2024-12-05 19:40:29.932852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-12-05 19:40:29.932873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.535 [2024-12-05 19:40:29.937981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.535 [2024-12-05 19:40:29.938095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-12-05 19:40:29.938117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.535 [2024-12-05 19:40:29.943291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.535 [2024-12-05 19:40:29.943405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-12-05 19:40:29.943427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.535 [2024-12-05 19:40:29.948494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.535 [2024-12-05 19:40:29.948607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-12-05 19:40:29.948643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.535 [2024-12-05 19:40:29.953755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.535 [2024-12-05 19:40:29.953876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-12-05 19:40:29.953898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.535 [2024-12-05 19:40:29.959209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.535 [2024-12-05 19:40:29.959334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-12-05 19:40:29.959356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.535 [2024-12-05 19:40:29.964306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.535 [2024-12-05 19:40:29.964416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-12-05 19:40:29.964438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:29.969491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 
19:40:29.969574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:29.969596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:29.974857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:29.974969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:29.974990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:29.980107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:29.980194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:29.980231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:29.985321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:29.985419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:29.985440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:29.990632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:29.990752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:29.990788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:29.995824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:29.995966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:29.995987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:30.001001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:30.001097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:30.001117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:30.006228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 
00:20:36.795 [2024-12-05 19:40:30.006327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:30.006347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:30.011423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:30.011497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:30.011518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:30.016655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:30.016756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:30.016776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:30.022042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:30.022202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:30.022224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:30.027309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:30.027426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:30.027446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:30.032517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:30.032618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:30.032638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:30.037827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:30.037930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:30.037950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:30.043039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with 
pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:30.043149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:30.043169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:30.048287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:30.048406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:30.048427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:30.053641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:30.053747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:30.053767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:30.058753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:30.058885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:30.058907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:30.064117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:30.064217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:30.064253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:30.069335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.795 [2024-12-05 19:40:30.069458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.795 [2024-12-05 19:40:30.069478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.795 [2024-12-05 19:40:30.074642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.074750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.074804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.080051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.080238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.080258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.085348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.085458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.085479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.090497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.090582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.090602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.095525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.095637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.095672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.100760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.100861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.100882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.105907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.106002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.106022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.110889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.111001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.111022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.116060] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.116195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.116217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.121349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.121457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.121478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.126442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.126511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.126532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.131517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.131617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.131653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.136836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.136939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.136960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.142322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.142432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.142452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.147432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.147533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.147553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.152628] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.152741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.152761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.157790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.157890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.157910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.163006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.163116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.163137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.168117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.168205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.168241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.173277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.173387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.173408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.178419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.178506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.178528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 19:40:30.183449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.183545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.183566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.796 [2024-12-05 
19:40:30.188544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.796 [2024-12-05 19:40:30.188643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.796 [2024-12-05 19:40:30.188664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.797 [2024-12-05 19:40:30.193688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.797 [2024-12-05 19:40:30.193814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.797 [2024-12-05 19:40:30.193835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.797 [2024-12-05 19:40:30.198962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.797 [2024-12-05 19:40:30.199062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.797 [2024-12-05 19:40:30.199083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.797 [2024-12-05 19:40:30.204025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.797 [2024-12-05 19:40:30.204130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.797 [2024-12-05 19:40:30.204165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.797 5870.00 IOPS, 733.75 MiB/s [2024-12-05T19:40:30.234Z] [2024-12-05 19:40:30.209912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbd4dc0) with pdu=0x200016eff3c8 00:20:36.797 [2024-12-05 19:40:30.210012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.797 [2024-12-05 19:40:30.210034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.797 00:20:36.797 Latency(us) 00:20:36.797 [2024-12-05T19:40:30.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.797 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:36.797 nvme0n1 : 2.00 5867.32 733.42 0.00 0.00 2720.79 2055.45 11617.75 00:20:36.797 [2024-12-05T19:40:30.234Z] =================================================================================================================== 00:20:36.797 [2024-12-05T19:40:30.234Z] Total : 5867.32 733.42 0.00 0.00 2720.79 2055.45 11617.75 00:20:36.797 { 00:20:36.797 "results": [ 00:20:36.797 { 00:20:36.797 "job": "nvme0n1", 00:20:36.797 "core_mask": "0x2", 00:20:36.797 "workload": "randwrite", 00:20:36.797 "status": "finished", 00:20:36.797 "queue_depth": 16, 00:20:36.797 "io_size": 131072, 00:20:36.797 "runtime": 2.00347, 00:20:36.797 "iops": 5867.320199453948, 00:20:36.797 "mibps": 733.4150249317435, 00:20:36.797 "io_failed": 0, 
00:20:36.797 "io_timeout": 0, 00:20:36.797 "avg_latency_us": 2720.787441784927, 00:20:36.797 "min_latency_us": 2055.447272727273, 00:20:36.797 "max_latency_us": 11617.745454545455 00:20:36.797 } 00:20:36.797 ], 00:20:36.797 "core_count": 1 00:20:36.797 } 00:20:37.056 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:37.056 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:37.056 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:37.056 | .driver_specific 00:20:37.056 | .nvme_error 00:20:37.056 | .status_code 00:20:37.056 | .command_transient_transport_error' 00:20:37.056 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:37.056 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 380 > 0 )) 00:20:37.056 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80407 00:20:37.314 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80407 ']' 00:20:37.314 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80407 00:20:37.314 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:37.314 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.314 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80407 00:20:37.314 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:37.314 killing process with pid 80407 00:20:37.314 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:37.314 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80407' 00:20:37.314 Received shutdown signal, test time was about 2.000000 seconds 00:20:37.314 00:20:37.314 Latency(us) 00:20:37.314 [2024-12-05T19:40:30.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.314 [2024-12-05T19:40:30.751Z] =================================================================================================================== 00:20:37.314 [2024-12-05T19:40:30.751Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.314 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80407 00:20:37.314 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80407 00:20:37.314 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80205 00:20:37.314 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80205 ']' 00:20:37.314 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80205 00:20:37.314 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:37.314 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.314 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80205 00:20:37.571 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.571 killing process with pid 80205 00:20:37.571 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.571 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80205' 00:20:37.571 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80205 00:20:37.572 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80205 00:20:37.572 00:20:37.572 real 0m17.283s 00:20:37.572 user 0m34.085s 00:20:37.572 sys 0m4.671s 00:20:37.572 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.572 ************************************ 00:20:37.572 END TEST nvmf_digest_error 00:20:37.572 ************************************ 00:20:37.572 19:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:37.830 rmmod nvme_tcp 00:20:37.830 rmmod nvme_fabrics 00:20:37.830 rmmod nvme_keyring 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80205 ']' 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80205 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80205 ']' 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80205 00:20:37.830 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80205) - No such process 00:20:37.830 Process with pid 80205 is not found 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80205 is not found' 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:37.830 19:40:31 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:37.830 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:38.087 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:20:38.088 ************************************ 00:20:38.088 END TEST nvmf_digest 00:20:38.088 ************************************ 00:20:38.088 00:20:38.088 real 0m33.907s 00:20:38.088 user 1m4.703s 00:20:38.088 sys 0m9.576s 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.088 ************************************ 00:20:38.088 START TEST nvmf_host_multipath 00:20:38.088 ************************************ 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:38.088 * Looking for test storage... 00:20:38.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:20:38.088 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:38.347 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:38.347 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.347 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.347 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.347 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:38.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.348 --rc genhtml_branch_coverage=1 00:20:38.348 --rc genhtml_function_coverage=1 00:20:38.348 --rc genhtml_legend=1 00:20:38.348 --rc geninfo_all_blocks=1 00:20:38.348 --rc geninfo_unexecuted_blocks=1 00:20:38.348 00:20:38.348 ' 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:38.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.348 --rc genhtml_branch_coverage=1 00:20:38.348 --rc genhtml_function_coverage=1 00:20:38.348 --rc genhtml_legend=1 00:20:38.348 --rc geninfo_all_blocks=1 00:20:38.348 --rc geninfo_unexecuted_blocks=1 00:20:38.348 00:20:38.348 ' 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:38.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.348 --rc genhtml_branch_coverage=1 00:20:38.348 --rc genhtml_function_coverage=1 00:20:38.348 --rc genhtml_legend=1 00:20:38.348 --rc geninfo_all_blocks=1 00:20:38.348 --rc geninfo_unexecuted_blocks=1 00:20:38.348 00:20:38.348 ' 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:38.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.348 --rc genhtml_branch_coverage=1 00:20:38.348 --rc genhtml_function_coverage=1 00:20:38.348 --rc genhtml_legend=1 00:20:38.348 --rc geninfo_all_blocks=1 00:20:38.348 --rc geninfo_unexecuted_blocks=1 00:20:38.348 00:20:38.348 ' 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:38.348 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:38.348 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:38.349 Cannot find device "nvmf_init_br" 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:38.349 Cannot find device "nvmf_init_br2" 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:38.349 Cannot find device "nvmf_tgt_br" 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.349 Cannot find device "nvmf_tgt_br2" 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:38.349 Cannot find device "nvmf_init_br" 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:38.349 Cannot find device "nvmf_init_br2" 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:38.349 Cannot find device "nvmf_tgt_br" 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:38.349 Cannot find device "nvmf_tgt_br2" 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:38.349 Cannot find device "nvmf_br" 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:20:38.349 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:38.608 Cannot find device "nvmf_init_if" 00:20:38.608 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:20:38.608 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:38.608 Cannot find device "nvmf_init_if2" 00:20:38.608 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:20:38.608 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:20:38.608 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.608 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:20:38.608 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.608 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.608 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:20:38.608 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:38.608 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:38.608 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:38.609 19:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:38.609 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:38.609 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:20:38.609 00:20:38.609 --- 10.0.0.3 ping statistics --- 00:20:38.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.609 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:38.609 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:38.609 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:20:38.609 00:20:38.609 --- 10.0.0.4 ping statistics --- 00:20:38.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.609 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:38.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:38.609 00:20:38.609 --- 10.0.0.1 ping statistics --- 00:20:38.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.609 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:38.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:38.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:20:38.609 00:20:38.609 --- 10.0.0.2 ping statistics --- 00:20:38.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.609 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:38.609 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:38.868 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:38.868 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:38.868 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.868 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:38.868 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80730 00:20:38.868 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80730 00:20:38.869 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:38.869 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80730 ']' 00:20:38.869 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.869 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.869 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.869 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.869 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:38.869 [2024-12-05 19:40:32.128348] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
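At this point the veth/bridge topology has been verified end to end (all four ping checks above report 0% packet loss), nvme-tcp has been loaded, and nvmfappstart launches the SPDK target inside the nvmf_tgt_ns_spdk namespace, waiting on its RPC socket before the multipath steps begin. Reduced to the commands visible in this trace (the wait loop below is a sketch of what the waitforlisten helper accomplishes, not its exact implementation), the target bring-up is roughly:

  # launch the NVMe-oF target inside the test namespace
  # -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups, -m 0x3: run on cores 0-1
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # block until the app accepts RPCs on /var/tmp/spdk.sock; polling a cheap RPC
  # such as rpc_get_methods is one way to detect readiness (assumed stand-in here)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock \
        rpc_get_methods &> /dev/null; do sleep 0.5; done

Once the socket is up, multipath.sh issues the RPCs that follow in this trace: nvmf_create_transport -t tcp -o -u 8192, a 64 MB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled (-r), and listeners on 10.0.0.3 ports 4420 and 4421.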
00:20:38.869 [2024-12-05 19:40:32.128473] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.869 [2024-12-05 19:40:32.274649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:39.127 [2024-12-05 19:40:32.331253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.127 [2024-12-05 19:40:32.331345] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.127 [2024-12-05 19:40:32.331371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.127 [2024-12-05 19:40:32.331379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.127 [2024-12-05 19:40:32.331386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.127 [2024-12-05 19:40:32.332745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.127 [2024-12-05 19:40:32.332756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.127 [2024-12-05 19:40:32.392483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:39.127 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.127 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:20:39.127 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:39.127 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:39.127 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:39.127 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.127 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80730 00:20:39.127 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:39.386 [2024-12-05 19:40:32.795043] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.386 19:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:39.953 Malloc0 00:20:39.953 19:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:40.213 19:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:40.471 19:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:40.731 [2024-12-05 19:40:33.973863] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:40.731 19:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:40.989 [2024-12-05 19:40:34.221935] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:40.989 19:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80774 00:20:40.989 19:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:40.989 19:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:40.989 19:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80774 /var/tmp/bdevperf.sock 00:20:40.989 19:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80774 ']' 00:20:40.989 19:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.989 19:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.989 19:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.989 19:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.989 19:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:42.364 19:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.364 19:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:20:42.364 19:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:42.364 19:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:42.622 Nvme0n1 00:20:42.622 19:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:43.188 Nvme0n1 00:20:43.188 19:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:43.188 19:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:44.123 19:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:20:44.123 19:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:44.382 19:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:44.950 19:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:20:44.950 19:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80829 00:20:44.950 19:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80730 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:44.950 19:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:51.520 Attaching 4 probes... 00:20:51.520 @path[10.0.0.3, 4421]: 17050 00:20:51.520 @path[10.0.0.3, 4421]: 17451 00:20:51.520 @path[10.0.0.3, 4421]: 17720 00:20:51.520 @path[10.0.0.3, 4421]: 18213 00:20:51.520 @path[10.0.0.3, 4421]: 19542 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80829 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80944 00:20:51.520 19:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80730 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:51.520 19:40:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:58.103 19:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:58.103 19:40:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:58.103 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:58.103 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:58.103 Attaching 4 probes... 00:20:58.103 @path[10.0.0.3, 4420]: 18101 00:20:58.103 @path[10.0.0.3, 4420]: 19282 00:20:58.103 @path[10.0.0.3, 4420]: 19319 00:20:58.103 @path[10.0.0.3, 4420]: 18813 00:20:58.103 @path[10.0.0.3, 4420]: 18389 00:20:58.103 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:58.103 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:58.103 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:58.103 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:58.103 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:58.103 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:58.103 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80944 00:20:58.103 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:58.103 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:20:58.103 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:58.361 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:58.629 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:20:58.629 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81056 00:20:58.629 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:58.629 19:40:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80730 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:05.208 19:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:05.208 19:40:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:05.208 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:05.208 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:05.208 Attaching 4 probes... 00:21:05.208 @path[10.0.0.3, 4421]: 13628 00:21:05.208 @path[10.0.0.3, 4421]: 18418 00:21:05.208 @path[10.0.0.3, 4421]: 17635 00:21:05.208 @path[10.0.0.3, 4421]: 17506 00:21:05.208 @path[10.0.0.3, 4421]: 17908 00:21:05.208 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:05.208 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:05.208 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:05.208 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:05.208 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:05.208 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:05.208 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81056 00:21:05.208 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:05.208 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:05.208 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:05.208 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:05.466 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:05.466 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81174 00:21:05.466 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:05.466 19:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80730 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:12.087 19:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:12.087 19:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:12.087 19:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:12.087 19:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:12.087 Attaching 4 probes... 
00:21:12.087 00:21:12.087 00:21:12.087 00:21:12.087 00:21:12.087 00:21:12.087 19:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:12.087 19:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:12.087 19:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:12.087 19:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:12.087 19:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:12.087 19:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:12.087 19:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81174 00:21:12.087 19:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:12.087 19:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:12.087 19:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:12.087 19:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:12.344 19:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:12.344 19:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81291 00:21:12.344 19:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:12.344 19:41:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80730 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:18.909 19:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:18.909 19:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:18.909 19:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:18.909 19:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:18.909 Attaching 4 probes... 
00:21:18.909 @path[10.0.0.3, 4421]: 17028 00:21:18.909 @path[10.0.0.3, 4421]: 18008 00:21:18.909 @path[10.0.0.3, 4421]: 17725 00:21:18.909 @path[10.0.0.3, 4421]: 18207 00:21:18.909 @path[10.0.0.3, 4421]: 18143 00:21:18.909 19:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:18.909 19:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:18.909 19:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:18.909 19:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:18.909 19:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:18.909 19:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:18.909 19:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81291 00:21:18.909 19:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:18.909 19:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:18.909 19:41:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:19.914 19:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:19.914 19:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81410 00:21:19.914 19:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:19.914 19:41:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80730 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:26.468 19:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:26.468 19:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:26.468 19:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:26.468 19:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:26.468 Attaching 4 probes... 
00:21:26.468 @path[10.0.0.3, 4420]: 17389 00:21:26.468 @path[10.0.0.3, 4420]: 18054 00:21:26.468 @path[10.0.0.3, 4420]: 17783 00:21:26.468 @path[10.0.0.3, 4420]: 18378 00:21:26.468 @path[10.0.0.3, 4420]: 18292 00:21:26.468 19:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:26.468 19:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:26.468 19:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:26.468 19:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:26.468 19:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:26.468 19:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:26.468 19:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81410 00:21:26.468 19:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:26.468 19:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:26.468 [2024-12-05 19:41:19.691192] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:26.468 19:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:26.727 19:41:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:33.292 19:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:33.292 19:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81589 00:21:33.292 19:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80730 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:33.292 19:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:38.559 19:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:38.559 19:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:39.135 Attaching 4 probes... 
00:21:39.135 @path[10.0.0.3, 4421]: 17787 00:21:39.135 @path[10.0.0.3, 4421]: 18443 00:21:39.135 @path[10.0.0.3, 4421]: 18224 00:21:39.135 @path[10.0.0.3, 4421]: 18344 00:21:39.135 @path[10.0.0.3, 4421]: 18612 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81589 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80774 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80774 ']' 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80774 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80774 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:39.135 killing process with pid 80774 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80774' 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80774 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80774 00:21:39.135 { 00:21:39.135 "results": [ 00:21:39.135 { 00:21:39.135 "job": "Nvme0n1", 00:21:39.135 "core_mask": "0x4", 00:21:39.135 "workload": "verify", 00:21:39.135 "status": "terminated", 00:21:39.135 "verify_range": { 00:21:39.135 "start": 0, 00:21:39.135 "length": 16384 00:21:39.135 }, 00:21:39.135 "queue_depth": 128, 00:21:39.135 "io_size": 4096, 00:21:39.135 "runtime": 55.758892, 00:21:39.135 "iops": 7697.01449591215, 00:21:39.135 "mibps": 30.066462874656835, 00:21:39.135 "io_failed": 0, 00:21:39.135 "io_timeout": 0, 00:21:39.135 "avg_latency_us": 16598.20096779735, 00:21:39.135 "min_latency_us": 1347.9563636363637, 00:21:39.135 "max_latency_us": 7046430.72 00:21:39.135 } 00:21:39.135 ], 00:21:39.135 "core_count": 1 00:21:39.135 } 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80774 00:21:39.135 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:39.135 [2024-12-05 19:40:34.287717] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 
24.03.0 initialization... 00:21:39.135 [2024-12-05 19:40:34.287809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80774 ] 00:21:39.136 [2024-12-05 19:40:34.433115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.136 [2024-12-05 19:40:34.492397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.136 [2024-12-05 19:40:34.549865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:39.136 Running I/O for 90 seconds... 00:21:39.136 6804.00 IOPS, 26.58 MiB/s [2024-12-05T19:41:32.573Z] 7695.50 IOPS, 30.06 MiB/s [2024-12-05T19:41:32.573Z] 8048.33 IOPS, 31.44 MiB/s [2024-12-05T19:41:32.573Z] 8214.25 IOPS, 32.09 MiB/s [2024-12-05T19:41:32.573Z] 8344.20 IOPS, 32.59 MiB/s [2024-12-05T19:41:32.573Z] 8462.83 IOPS, 33.06 MiB/s [2024-12-05T19:41:32.573Z] 8640.14 IOPS, 33.75 MiB/s [2024-12-05T19:41:32.573Z] 8762.12 IOPS, 34.23 MiB/s [2024-12-05T19:41:32.573Z] [2024-12-05 19:40:44.915328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.136 [2024-12-05 19:40:44.915388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.915441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.136 [2024-12-05 19:40:44.915461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.915482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.136 [2024-12-05 19:40:44.915496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.915530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.136 [2024-12-05 19:40:44.915544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.915563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.136 [2024-12-05 19:40:44.915577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.915612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.136 [2024-12-05 19:40:44.915643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.915663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.136 [2024-12-05 19:40:44.915677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.915697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.136 [2024-12-05 19:40:44.915711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.915730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.915745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.915764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.915799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.915821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.915835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.915855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.915869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.915888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.915931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.915954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.915970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
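The paired nvme_qpair prints in this stretch of try.txt show the failover itself: each in-flight WRITE/READ on qpair 1 is completed by the target with ASYMMETRIC ACCESS INACCESSIBLE (03/02) right after the test flips the ANA state of the listener the host is currently using, which is what lets the host retry the I/O on the path that is still reachable. The trigger is the same RPC used throughout this run, repeated here only for context (subsystem, address, and port values exactly as in this test):

  # mark the 4420 listener ANA-inaccessible so outstanding and new I/O moves to port 4421
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible

The confirm_io_on_port check that follows each flip reads nvmf_subsystem_get_listeners, picks the trsvcid whose ana_state matches, and compares it against the port reported by the bpftrace probe's @path[...] counters.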
00:21:39.136 [2024-12-05 19:40:44.916089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.136 [2024-12-05 19:40:44.916739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.136 [2024-12-05 19:40:44.916890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.136 [2024-12-05 19:40:44.916930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.916992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.136 [2024-12-05 19:40:44.917007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:39.136 [2024-12-05 19:40:44.917026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:21:39.137 [2024-12-05 19:40:44.917438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.137 [2024-12-05 19:40:44.917522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.137 [2024-12-05 19:40:44.917558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.137 [2024-12-05 19:40:44.917610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.137 [2024-12-05 19:40:44.917662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.137 [2024-12-05 19:40:44.917698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.137 [2024-12-05 19:40:44.917734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.137 [2024-12-05 19:40:44.917770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.137 [2024-12-05 19:40:44.917807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.917973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.917988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.918022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.918037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.918055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.918070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.918090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.918105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.918124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.918138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.918157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.918171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.918201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.918218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.918238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.918252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.918271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.918286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.918304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.918318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.918337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.918352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.918370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.918391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.918412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.137 [2024-12-05 19:40:44.918426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.918445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.137 [2024-12-05 19:40:44.918459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.918478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.137 [2024-12-05 19:40:44.918492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:39.137 [2024-12-05 19:40:44.918512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.138 [2024-12-05 19:40:44.918526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.918545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.138 [2024-12-05 19:40:44.918560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.918578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.138 [2024-12-05 19:40:44.918609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.918645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.138 [2024-12-05 19:40:44.918660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.918681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.138 [2024-12-05 19:40:44.918696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.918716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.138 [2024-12-05 19:40:44.918731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.918756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.918772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.918792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.918807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.918827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.918843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.918875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.918890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.918910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.918925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.918946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 
nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.918961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.919024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.919057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.919091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.919140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.919174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.919220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.919255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.919290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.919324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.919365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.919400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.919433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.919469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.919503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.138 [2024-12-05 19:40:44.919551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:61008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.138 [2024-12-05 19:40:44.919585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.138 [2024-12-05 19:40:44.919653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.138 [2024-12-05 19:40:44.919688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.138 [2024-12-05 19:40:44.919724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:21:39.138 [2024-12-05 19:40:44.919745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.138 [2024-12-05 19:40:44.919760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.919781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.138 [2024-12-05 19:40:44.919797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.921282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.138 [2024-12-05 19:40:44.921333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.921361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.921379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.921399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.921414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.921433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.921447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.921467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.921481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.921500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.138 [2024-12-05 19:40:44.921514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:39.138 [2024-12-05 19:40:44.921533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:44.921547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:44.921567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:44.921583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:44.921651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:44.921670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:44.921692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:44.921708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:44.921728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:44.921743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:44.921765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:44.921780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:44.921800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:44.921826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:44.921848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:44.921864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:44.921885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:44.921901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:44.921922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:44.921952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:44.921976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:44.922007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:44.922042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:44.922063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:44.922083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:44.922097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.139 8740.67 IOPS, 34.14 MiB/s [2024-12-05T19:41:32.576Z] 8789.00 IOPS, 34.33 MiB/s [2024-12-05T19:41:32.576Z] 8861.91 IOPS, 34.62 MiB/s [2024-12-05T19:41:32.576Z] 8934.75 IOPS, 34.90 MiB/s [2024-12-05T19:41:32.576Z] 8969.31 IOPS, 35.04 MiB/s [2024-12-05T19:41:32.576Z] 8983.50 IOPS, 35.09 MiB/s [2024-12-05T19:41:32.576Z] [2024-12-05 19:40:51.550704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:51.550785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.550859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:51.550881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.550905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:51.550922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.550944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:51.550961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.550982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:51.550998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:51.551060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:51.551100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:51.551137] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.139 [2024-12-05 19:40:51.551209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.139 [2024-12-05 19:40:51.551246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.139 [2024-12-05 19:40:51.551282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.139 [2024-12-05 19:40:51.551319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.139 [2024-12-05 19:40:51.551355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.139 [2024-12-05 19:40:51.551391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.139 [2024-12-05 19:40:51.551444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.139 [2024-12-05 19:40:51.551479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.139 [2024-12-05 19:40:51.551514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:39.139 [2024-12-05 19:40:51.551561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.139 [2024-12-05 19:40:51.551598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.139 [2024-12-05 19:40:51.551634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.139 [2024-12-05 19:40:51.551671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.139 [2024-12-05 19:40:51.551707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.139 [2024-12-05 19:40:51.551743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.139 [2024-12-05 19:40:51.551796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.139 [2024-12-05 19:40:51.551840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:39.139 [2024-12-05 19:40:51.551862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.551879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.551910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.551950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.551974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:36 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.551991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.552029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.552066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.552127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.552168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.140 [2024-12-05 19:40:51.552214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.140 [2024-12-05 19:40:51.552268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:28888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.140 [2024-12-05 19:40:51.552312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.140 [2024-12-05 19:40:51.552350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.140 [2024-12-05 19:40:51.552393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552414] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.140 [2024-12-05 19:40:51.552430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.140 [2024-12-05 19:40:51.552466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.140 [2024-12-05 19:40:51.552517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:28936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.140 [2024-12-05 19:40:51.552553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.140 [2024-12-05 19:40:51.552589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.140 [2024-12-05 19:40:51.552633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.140 [2024-12-05 19:40:51.552670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.140 [2024-12-05 19:40:51.552705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:28976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.140 [2024-12-05 19:40:51.552741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.140 [2024-12-05 19:40:51.552810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 
sqhd:0060 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:28992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.140 [2024-12-05 19:40:51.552848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.552918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.552958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.552980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.552997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.553019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.553035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.553056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.553072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.553094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.553110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.553172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.553188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.553221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.553239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.553259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.553275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.553295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.140 [2024-12-05 19:40:51.553310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:39.140 [2024-12-05 19:40:51.553331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.553347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.553383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.553418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.553454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.553489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.553525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.141 [2024-12-05 19:40:51.553560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:29008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.141 [2024-12-05 19:40:51.553597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:29016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.141 [2024-12-05 19:40:51.553640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.141 [2024-12-05 19:40:51.553677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.141 [2024-12-05 19:40:51.553713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.141 [2024-12-05 19:40:51.553749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.141 [2024-12-05 19:40:51.553802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.141 [2024-12-05 19:40:51.553840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.141 [2024-12-05 19:40:51.553878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.141 [2024-12-05 19:40:51.553916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.141 [2024-12-05 19:40:51.553954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.553976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:29088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.141 [2024-12-05 19:40:51.553992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:39.141 [2024-12-05 19:40:51.554029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.141 [2024-12-05 19:40:51.554067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:29112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.141 [2024-12-05 19:40:51.554111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.141 [2024-12-05 19:40:51.554163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.554207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.554245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.554283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.554321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.554359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.554396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.554434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.554472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.554510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.554561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.554598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.554645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.554682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.554719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.554756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.141 [2024-12-05 19:40:51.554810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554832] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.141 [2024-12-05 19:40:51.554848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:39.141 [2024-12-05 19:40:51.554870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.142 [2024-12-05 19:40:51.554886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.554908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.142 [2024-12-05 19:40:51.554924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.554946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.142 [2024-12-05 19:40:51.554962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.554990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:29160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.142 [2024-12-05 19:40:51.555006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.555028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.142 [2024-12-05 19:40:51.555044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.555065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.142 [2024-12-05 19:40:51.555082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.555110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:29184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.142 [2024-12-05 19:40:51.555127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.555173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.142 [2024-12-05 19:40:51.555208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.555230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.142 [2024-12-05 19:40:51.555246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:21:39.142 [2024-12-05 19:40:51.555268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:29208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.142 [2024-12-05 19:40:51.555284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.555306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.142 [2024-12-05 19:40:51.555323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.555352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.142 [2024-12-05 19:40:51.555369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.555390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.142 [2024-12-05 19:40:51.555407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.555429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.142 [2024-12-05 19:40:51.555445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.556222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.142 [2024-12-05 19:40:51.556266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.556300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.142 [2024-12-05 19:40:51.556318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.556346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.142 [2024-12-05 19:40:51.556380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.556409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.142 [2024-12-05 19:40:51.556425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.556453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.142 [2024-12-05 19:40:51.556479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.556509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.142 [2024-12-05 19:40:51.556526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.556553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.142 [2024-12-05 19:40:51.556568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.556596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.142 [2024-12-05 19:40:51.556612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.556656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.142 [2024-12-05 19:40:51.556677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.556705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.142 [2024-12-05 19:40:51.556721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.556749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.142 [2024-12-05 19:40:51.556765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.556810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.142 [2024-12-05 19:40:51.556826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.556855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.142 [2024-12-05 19:40:51.556871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.556905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.142 [2024-12-05 19:40:51.556922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.556951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.142 [2024-12-05 19:40:51.556967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.556995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.142 [2024-12-05 19:40:51.557012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:39.142 [2024-12-05 19:40:51.557041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.143 [2024-12-05 19:40:51.557065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:39.143 8977.67 IOPS, 35.07 MiB/s [2024-12-05T19:41:32.580Z] 8437.12 IOPS, 32.96 MiB/s [2024-12-05T19:41:32.580Z] 8488.12 IOPS, 33.16 MiB/s [2024-12-05T19:41:32.580Z] 8524.11 IOPS, 33.30 MiB/s [2024-12-05T19:41:32.580Z] 8537.16 IOPS, 33.35 MiB/s [2024-12-05T19:41:32.580Z] 8550.10 IOPS, 33.40 MiB/s [2024-12-05T19:41:32.580Z] 8566.57 IOPS, 33.46 MiB/s [2024-12-05T19:41:32.580Z] 8578.64 IOPS, 33.51 MiB/s [2024-12-05T19:41:32.580Z] [2024-12-05 19:40:58.677394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.143 [2024-12-05 19:40:58.677458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.677531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.143 [2024-12-05 19:40:58.677553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.677593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.143 [2024-12-05 19:40:58.677611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.677634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.143 [2024-12-05 19:40:58.677651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.677688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.143 [2024-12-05 19:40:58.677703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.677725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.143 [2024-12-05 19:40:58.677741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.677762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 
nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.143 [2024-12-05 19:40:58.677778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.677799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.143 [2024-12-05 19:40:58.677815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.677836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.677851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.677873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.677889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.677910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.677948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.677992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.678009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.678030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.678045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.678065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.678081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.678101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.678116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.678137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.678152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.678184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.678202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.678223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.678239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.678262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.678278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.678299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.678314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.678335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.678350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.678371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.678386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.678406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.678421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.678442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.678466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.678489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.678507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:39.143 [2024-12-05 19:40:58.678528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.143 [2024-12-05 19:40:58.678543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 
dnr:0 00:21:39.144 [2024-12-05 19:40:58.678564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.144 [2024-12-05 19:40:58.678597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.678619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.144 [2024-12-05 19:40:58.678635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.678656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.144 [2024-12-05 19:40:58.678672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.678694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.144 [2024-12-05 19:40:58.678710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.678731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.144 [2024-12-05 19:40:58.678747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.678769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.144 [2024-12-05 19:40:58.678802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.678842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.144 [2024-12-05 19:40:58.678864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.678891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.144 [2024-12-05 19:40:58.678908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.678946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.144 [2024-12-05 19:40:58.678963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.678985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.144 [2024-12-05 19:40:58.679010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.144 [2024-12-05 19:40:58.679049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.144 [2024-12-05 19:40:58.679087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.144 [2024-12-05 19:40:58.679140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.144 [2024-12-05 19:40:58.679205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.144 [2024-12-05 19:40:58.679246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.144 [2024-12-05 19:40:58.679283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.144 [2024-12-05 19:40:58.679321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.144 [2024-12-05 19:40:58.679359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.144 [2024-12-05 19:40:58.679396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.144 [2024-12-05 19:40:58.679433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.144 [2024-12-05 19:40:58.679471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.144 [2024-12-05 19:40:58.679519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.144 [2024-12-05 19:40:58.679590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.144 [2024-12-05 19:40:58.679628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.144 [2024-12-05 19:40:58.679666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.144 [2024-12-05 19:40:58.679704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.144 [2024-12-05 19:40:58.679742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:39.144 [2024-12-05 19:40:58.679764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.144 [2024-12-05 19:40:58.679780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.679801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.145 [2024-12-05 19:40:58.679817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.679838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:39.145 [2024-12-05 19:40:58.679854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.679876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.145 [2024-12-05 19:40:58.679892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.679939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.145 [2024-12-05 19:40:58.679963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.679989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.145 [2024-12-05 19:40:58.680006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.145 [2024-12-05 19:40:58.680044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.145 [2024-12-05 19:40:58.680092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.145 [2024-12-05 19:40:58.680144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.145 [2024-12-05 19:40:58.680186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.145 [2024-12-05 19:40:58.680226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.145 [2024-12-05 19:40:58.680283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 
lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.145 [2024-12-05 19:40:58.680320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.145 [2024-12-05 19:40:58.680358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.145 [2024-12-05 19:40:58.680395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.145 [2024-12-05 19:40:58.680431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.145 [2024-12-05 19:40:58.680468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.145 [2024-12-05 19:40:58.680505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.145 [2024-12-05 19:40:58.680541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.145 [2024-12-05 19:40:58.680587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.145 [2024-12-05 19:40:58.680624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.145 [2024-12-05 19:40:58.680661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.145 [2024-12-05 19:40:58.680698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.145 [2024-12-05 19:40:58.680736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.145 [2024-12-05 19:40:58.680773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.145 [2024-12-05 19:40:58.680809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.145 [2024-12-05 19:40:58.680847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.145 [2024-12-05 19:40:58.680888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:39.145 [2024-12-05 19:40:58.680909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.680924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.680946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.680962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.680983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.680998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.681041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:21:39.146 [2024-12-05 19:40:58.681063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.681079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.681115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.681166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.681203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.681240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.681294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.681331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.681370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.681426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.681464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.681503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.146 [2024-12-05 19:40:58.681548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.146 [2024-12-05 19:40:58.681588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.146 [2024-12-05 19:40:58.681636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.146 [2024-12-05 19:40:58.681690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.146 [2024-12-05 19:40:58.681728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.146 [2024-12-05 19:40:58.681765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.146 [2024-12-05 19:40:58.681803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.146 [2024-12-05 19:40:58.681841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.146 [2024-12-05 19:40:58.681878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.146 [2024-12-05 19:40:58.681915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.146 [2024-12-05 19:40:58.681953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.681974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.146 [2024-12-05 19:40:58.681990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.682012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.146 [2024-12-05 19:40:58.682028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.682073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.146 [2024-12-05 19:40:58.682090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.682113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.146 [2024-12-05 19:40:58.682129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.682884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.146 [2024-12-05 19:40:58.682913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.682947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.682965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.682995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.683012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.683041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:39.146 [2024-12-05 19:40:58.683062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.683092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.683121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.683154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.683171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.683200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.683217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.683247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.683264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:39.146 [2024-12-05 19:40:58.683309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.146 [2024-12-05 19:40:58.683336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:40:58.683366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:40:58.683383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:40:58.683424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:40:58.683442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:40:58.683471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:40:58.683487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:40:58.683516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:40:58.683533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:40:58.683561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:40:58.683578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:40:58.683607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:40:58.683639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:40:58.683668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:40:58.683684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:40:58.683712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:40:58.683729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:39.147 8236.96 IOPS, 32.18 MiB/s [2024-12-05T19:41:32.584Z] 7893.75 IOPS, 30.83 MiB/s [2024-12-05T19:41:32.584Z] 7578.00 IOPS, 29.60 MiB/s [2024-12-05T19:41:32.584Z] 7286.54 IOPS, 28.46 MiB/s [2024-12-05T19:41:32.584Z] 7016.67 IOPS, 27.41 MiB/s [2024-12-05T19:41:32.584Z] 6766.07 IOPS, 26.43 MiB/s [2024-12-05T19:41:32.584Z] 6532.76 IOPS, 25.52 MiB/s [2024-12-05T19:41:32.584Z] 6574.43 IOPS, 25.68 MiB/s [2024-12-05T19:41:32.584Z] 6645.71 IOPS, 25.96 MiB/s [2024-12-05T19:41:32.584Z] 6719.03 IOPS, 26.25 MiB/s [2024-12-05T19:41:32.584Z] 6783.55 IOPS, 26.50 MiB/s [2024-12-05T19:41:32.584Z] 6854.62 IOPS, 26.78 MiB/s [2024-12-05T19:41:32.584Z] 6919.80 IOPS, 27.03 MiB/s [2024-12-05T19:41:32.584Z] [2024-12-05 19:41:12.135062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.147 [2024-12-05 19:41:12.135130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.147 [2024-12-05 19:41:12.135176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.147 [2024-12-05 19:41:12.135213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.147 [2024-12-05 19:41:12.135242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9896 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.147 [2024-12-05 19:41:12.135329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.147 [2024-12-05 19:41:12.135357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.147 [2024-12-05 19:41:12.135384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.147 [2024-12-05 19:41:12.135412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 
[2024-12-05 19:41:12.135617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.135984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.135999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.147 [2024-12-05 19:41:12.136013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.147 [2024-12-05 19:41:12.136028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.148 [2024-12-05 19:41:12.136316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.148 [2024-12-05 19:41:12.136344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.148 [2024-12-05 19:41:12.136371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.148 [2024-12-05 19:41:12.136399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.148 [2024-12-05 19:41:12.136426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.148 [2024-12-05 19:41:12.136455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.148 [2024-12-05 19:41:12.136490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.148 [2024-12-05 19:41:12.136518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.148 [2024-12-05 19:41:12.136545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.148 [2024-12-05 19:41:12.136574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.148 [2024-12-05 19:41:12.136602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.148 [2024-12-05 19:41:12.136630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 
19:41:12.136855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.136977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.136991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.137004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.137019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.137032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.137046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.137059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.137074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.148 [2024-12-05 19:41:12.137086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.137101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.148 [2024-12-05 19:41:12.137131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.137146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.148 [2024-12-05 19:41:12.137169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.137187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.148 [2024-12-05 19:41:12.137200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.148 [2024-12-05 19:41:12.137215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.148 [2024-12-05 19:41:12.137234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.137263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.137291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.137318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.137346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.137374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.137401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.137429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:68 nsid:1 lba:10720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.137458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.137485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.137513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.137540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.137568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.149 [2024-12-05 19:41:12.137602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.149 [2024-12-05 19:41:12.137631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.149 [2024-12-05 19:41:12.137659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.149 [2024-12-05 19:41:12.137687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.149 [2024-12-05 19:41:12.137715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10160 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:39.149 [2024-12-05 19:41:12.137743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.149 [2024-12-05 19:41:12.137770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.149 [2024-12-05 19:41:12.137798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.149 [2024-12-05 19:41:12.137826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.149 [2024-12-05 19:41:12.137853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.149 [2024-12-05 19:41:12.137881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.149 [2024-12-05 19:41:12.137909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.149 [2024-12-05 19:41:12.137942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.149 [2024-12-05 19:41:12.137970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.137985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.149 [2024-12-05 19:41:12.137998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.138013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.149 
[2024-12-05 19:41:12.138026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.138041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.138054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.138069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.138082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.138097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.138119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.138136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.138149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.138163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.138177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.138191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.138204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.138219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.138231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.138246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.138259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.138273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.138286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.138309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.149 [2024-12-05 19:41:12.138323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.149 [2024-12-05 19:41:12.138338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.150 [2024-12-05 19:41:12.138351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.150 [2024-12-05 19:41:12.138379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.150 [2024-12-05 19:41:12.138407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.150 [2024-12-05 19:41:12.138434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.150 [2024-12-05 19:41:12.138463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.150 [2024-12-05 19:41:12.138490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.150 [2024-12-05 19:41:12.138517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.150 [2024-12-05 19:41:12.138546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.150 [2024-12-05 19:41:12.138574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.150 [2024-12-05 19:41:12.138601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.150 [2024-12-05 19:41:12.138629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.150 [2024-12-05 19:41:12.138657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.150 [2024-12-05 19:41:12.138690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.150 [2024-12-05 19:41:12.138718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.150 [2024-12-05 19:41:12.138746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.150 [2024-12-05 19:41:12.138774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.150 [2024-12-05 19:41:12.138818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.150 [2024-12-05 19:41:12.138848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.150 [2024-12-05 19:41:12.138877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.150 [2024-12-05 19:41:12.138905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.150 [2024-12-05 19:41:12.138934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.138989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:39.150 [2024-12-05 19:41:12.139005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:39.150 [2024-12-05 19:41:12.139017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10368 len:8 PRP1 0x0 PRP2 0x0 00:21:39.150 [2024-12-05 19:41:12.139030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.139178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.150 [2024-12-05 19:41:12.139228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.139244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.150 [2024-12-05 19:41:12.139265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.139291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.150 [2024-12-05 19:41:12.139305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.139318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.150 [2024-12-05 19:41:12.139331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.150 [2024-12-05 19:41:12.139352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11221e0 is same with the state(6) to be set 00:21:39.150 [2024-12-05 19:41:12.140529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:39.150 [2024-12-05 19:41:12.140569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11221e0 (9): Bad file descriptor 00:21:39.150 [2024-12-05 19:41:12.140913] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.150 [2024-12-05 19:41:12.140944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11221e0 with addr=10.0.0.3, port=4421 00:21:39.150 [2024-12-05 19:41:12.140961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11221e0 is same with the state(6) to be set 00:21:39.150 [2024-12-05 19:41:12.141026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11221e0 (9): Bad file descriptor 00:21:39.150 [2024-12-05 19:41:12.141173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:21:39.150 [2024-12-05 19:41:12.141199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:21:39.150 [2024-12-05 19:41:12.141215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:39.150 [2024-12-05 19:41:12.141228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:21:39.150 [2024-12-05 19:41:12.141243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:39.150 6975.17 IOPS, 27.25 MiB/s [2024-12-05T19:41:32.587Z] 7024.92 IOPS, 27.44 MiB/s [2024-12-05T19:41:32.587Z] 7069.95 IOPS, 27.62 MiB/s [2024-12-05T19:41:32.587Z] 7120.05 IOPS, 27.81 MiB/s [2024-12-05T19:41:32.587Z] 7163.85 IOPS, 27.98 MiB/s [2024-12-05T19:41:32.587Z] 7214.49 IOPS, 28.18 MiB/s [2024-12-05T19:41:32.587Z] 7260.62 IOPS, 28.36 MiB/s [2024-12-05T19:41:32.587Z] 7299.40 IOPS, 28.51 MiB/s [2024-12-05T19:41:32.587Z] 7333.14 IOPS, 28.65 MiB/s [2024-12-05T19:41:32.587Z] 7365.38 IOPS, 28.77 MiB/s [2024-12-05T19:41:32.587Z] [2024-12-05 19:41:22.198117] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:21:39.150 7397.76 IOPS, 28.90 MiB/s [2024-12-05T19:41:32.587Z] 7432.02 IOPS, 29.03 MiB/s [2024-12-05T19:41:32.587Z] 7471.35 IOPS, 29.18 MiB/s [2024-12-05T19:41:32.587Z] 7504.84 IOPS, 29.32 MiB/s [2024-12-05T19:41:32.587Z] 7531.38 IOPS, 29.42 MiB/s [2024-12-05T19:41:32.587Z] 7562.84 IOPS, 29.54 MiB/s [2024-12-05T19:41:32.588Z] 7592.63 IOPS, 29.66 MiB/s [2024-12-05T19:41:32.588Z] 7621.75 IOPS, 29.77 MiB/s [2024-12-05T19:41:32.588Z] 7650.09 IOPS, 29.88 MiB/s [2024-12-05T19:41:32.588Z] 7681.33 IOPS, 30.01 MiB/s [2024-12-05T19:41:32.588Z] Received shutdown signal, test time was about 55.759710 seconds 00:21:39.151 00:21:39.151 Latency(us) 00:21:39.151 [2024-12-05T19:41:32.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.151 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:39.151 Verification LBA range: start 0x0 length 0x4000 00:21:39.151 Nvme0n1 : 55.76 7697.01 30.07 0.00 0.00 16598.20 1347.96 7046430.72 00:21:39.151 [2024-12-05T19:41:32.588Z] =================================================================================================================== 00:21:39.151 [2024-12-05T19:41:32.588Z] Total : 7697.01 30.07 0.00 0.00 16598.20 1347.96 7046430.72 00:21:39.151 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:39.409 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:39.409 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:39.409 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:39.409 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:39.409 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:21:39.409 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:39.409 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # 
set +e 00:21:39.409 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:39.409 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:39.409 rmmod nvme_tcp 00:21:39.667 rmmod nvme_fabrics 00:21:39.667 rmmod nvme_keyring 00:21:39.667 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:39.667 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:21:39.667 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:21:39.667 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80730 ']' 00:21:39.667 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80730 00:21:39.667 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80730 ']' 00:21:39.667 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80730 00:21:39.667 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:21:39.667 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.667 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80730 00:21:39.667 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:39.667 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:39.667 killing process with pid 80730 00:21:39.667 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80730' 00:21:39.667 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80730 00:21:39.667 19:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80730 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:39.926 19:41:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:39.926 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:40.184 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:40.184 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.184 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.184 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.184 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:21:40.184 00:21:40.184 real 1m1.988s 00:21:40.184 user 2m52.442s 00:21:40.184 sys 0m18.405s 00:21:40.184 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.184 19:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:40.184 ************************************ 00:21:40.184 END TEST nvmf_host_multipath 00:21:40.184 ************************************ 00:21:40.184 19:41:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:40.184 19:41:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:40.184 19:41:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.184 19:41:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.184 ************************************ 00:21:40.184 START TEST nvmf_timeout 00:21:40.184 ************************************ 00:21:40.184 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:40.184 * Looking for test storage... 
00:21:40.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:40.184 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:40.184 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:21:40.184 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:40.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.444 --rc genhtml_branch_coverage=1 00:21:40.444 --rc genhtml_function_coverage=1 00:21:40.444 --rc genhtml_legend=1 00:21:40.444 --rc geninfo_all_blocks=1 00:21:40.444 --rc geninfo_unexecuted_blocks=1 00:21:40.444 00:21:40.444 ' 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:40.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.444 --rc genhtml_branch_coverage=1 00:21:40.444 --rc genhtml_function_coverage=1 00:21:40.444 --rc genhtml_legend=1 00:21:40.444 --rc geninfo_all_blocks=1 00:21:40.444 --rc geninfo_unexecuted_blocks=1 00:21:40.444 00:21:40.444 ' 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:40.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.444 --rc genhtml_branch_coverage=1 00:21:40.444 --rc genhtml_function_coverage=1 00:21:40.444 --rc genhtml_legend=1 00:21:40.444 --rc geninfo_all_blocks=1 00:21:40.444 --rc geninfo_unexecuted_blocks=1 00:21:40.444 00:21:40.444 ' 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:40.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.444 --rc genhtml_branch_coverage=1 00:21:40.444 --rc genhtml_function_coverage=1 00:21:40.444 --rc genhtml_legend=1 00:21:40.444 --rc geninfo_all_blocks=1 00:21:40.444 --rc geninfo_unexecuted_blocks=1 00:21:40.444 00:21:40.444 ' 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.444 
19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.444 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:40.445 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:40.445 19:41:33 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:40.445 Cannot find device "nvmf_init_br" 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:40.445 Cannot find device "nvmf_init_br2" 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:21:40.445 Cannot find device "nvmf_tgt_br" 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:40.445 Cannot find device "nvmf_tgt_br2" 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:40.445 Cannot find device "nvmf_init_br" 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:40.445 Cannot find device "nvmf_init_br2" 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:40.445 Cannot find device "nvmf_tgt_br" 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:40.445 Cannot find device "nvmf_tgt_br2" 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:40.445 Cannot find device "nvmf_br" 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:40.445 Cannot find device "nvmf_init_if" 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:40.445 Cannot find device "nvmf_init_if2" 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:40.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:40.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:21:40.445 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:40.705 19:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:40.705 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:40.705 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:21:40.705 00:21:40.705 --- 10.0.0.3 ping statistics --- 00:21:40.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.705 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:40.705 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:40.705 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:21:40.705 00:21:40.705 --- 10.0.0.4 ping statistics --- 00:21:40.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.705 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:40.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:40.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:40.705 00:21:40.705 --- 10.0.0.1 ping statistics --- 00:21:40.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.705 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:40.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:21:40.705 00:21:40.705 --- 10.0.0.2 ping statistics --- 00:21:40.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.705 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81945 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81945 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:40.705 19:41:34 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81945 ']' 00:21:40.705 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.706 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.706 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.706 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.706 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:40.964 [2024-12-05 19:41:34.147828] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:21:40.964 [2024-12-05 19:41:34.147922] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.964 [2024-12-05 19:41:34.288283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:40.964 [2024-12-05 19:41:34.344640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.964 [2024-12-05 19:41:34.344706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.964 [2024-12-05 19:41:34.344733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.964 [2024-12-05 19:41:34.344742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.964 [2024-12-05 19:41:34.344749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:40.964 [2024-12-05 19:41:34.345947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.964 [2024-12-05 19:41:34.345957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.964 [2024-12-05 19:41:34.400019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:41.223 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.223 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:41.223 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:41.223 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:41.223 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:41.223 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.223 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:41.223 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:41.481 [2024-12-05 19:41:34.801829] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.481 19:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:41.740 Malloc0 00:21:41.741 19:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:41.998 19:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:42.256 19:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:42.514 [2024-12-05 19:41:35.904404] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:42.514 19:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:42.514 19:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81987 00:21:42.514 19:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81987 /var/tmp/bdevperf.sock 00:21:42.514 19:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81987 ']' 00:21:42.514 19:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.514 19:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:42.514 19:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:42.514 19:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.514 19:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:42.773 [2024-12-05 19:41:35.968141] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:21:42.773 [2024-12-05 19:41:35.968226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81987 ] 00:21:42.773 [2024-12-05 19:41:36.119346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.773 [2024-12-05 19:41:36.180627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.032 [2024-12-05 19:41:36.238135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:43.677 19:41:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.677 19:41:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:43.677 19:41:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:43.935 19:41:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:44.194 NVMe0n1 00:21:44.194 19:41:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:44.194 19:41:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82010 00:21:44.194 19:41:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:21:44.194 Running I/O for 10 seconds... 
00:21:45.130 19:41:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:45.391 7189.00 IOPS, 28.08 MiB/s [2024-12-05T19:41:38.828Z] [2024-12-05 19:41:38.770756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.391 [2024-12-05 19:41:38.770812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.391 [2024-12-05 19:41:38.770853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.391 [2024-12-05 19:41:38.770864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.391 [2024-12-05 19:41:38.770876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.391 [2024-12-05 19:41:38.770886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.391 [2024-12-05 19:41:38.770897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.391 [2024-12-05 19:41:38.770906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.391 [2024-12-05 19:41:38.770917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.391 [2024-12-05 19:41:38.770926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.391 [2024-12-05 19:41:38.770937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.391 [2024-12-05 19:41:38.770947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.391 [2024-12-05 19:41:38.770958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.391 [2024-12-05 19:41:38.770966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.391 [2024-12-05 19:41:38.770977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.391 [2024-12-05 19:41:38.770987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.391 [2024-12-05 19:41:38.770998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.391 [2024-12-05 19:41:38.771007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.391 [2024-12-05 19:41:38.771018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67168 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.391 [2024-12-05 19:41:38.771027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.391 [2024-12-05 19:41:38.771038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.391 [2024-12-05 19:41:38.771047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.391 [2024-12-05 19:41:38.771058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.391 [2024-12-05 19:41:38.771067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.391 [2024-12-05 19:41:38.771078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.391 [2024-12-05 19:41:38.771086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.392 [2024-12-05 19:41:38.771105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.392 [2024-12-05 19:41:38.771115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.392 [2024-12-05 19:41:38.771144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.392 [2024-12-05 19:41:38.771173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.392 [2024-12-05 19:41:38.771185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.392 [2024-12-05 19:41:38.771195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.392 [2024-12-05 19:41:38.771207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.392 [2024-12-05 19:41:38.771216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.392 [2024-12-05 19:41:38.771230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.392 [2024-12-05 19:41:38.771240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.392 [2024-12-05 19:41:38.771251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.392 [2024-12-05 19:41:38.771262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.392 [2024-12-05 19:41:38.771273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.392 
[2024-12-05 19:41:38.771283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.392 (print_command/print_completion notices repeated for each remaining outstanding I/O on sqid:1: WRITE lba 67256-67744 and READ lba 66728-67032, len:8, every one completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0) 00:21:45.394 [2024-12-05 19:41:38.773418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:67032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.394 [2024-12-05 19:41:38.773427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.394 [2024-12-05 19:41:38.773438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.394 [2024-12-05 19:41:38.773447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.394 [2024-12-05 19:41:38.773458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.394 [2024-12-05 19:41:38.773466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.394 [2024-12-05 19:41:38.773478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.394 [2024-12-05 19:41:38.773487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.394 [2024-12-05 19:41:38.773498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.394 [2024-12-05 19:41:38.773507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.394 [2024-12-05 19:41:38.773517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.394 [2024-12-05 19:41:38.773527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.395 [2024-12-05 19:41:38.773538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.395 [2024-12-05 19:41:38.773546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.395 [2024-12-05 19:41:38.773557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2221970 is same with the state(6) to be set 00:21:45.395 [2024-12-05 19:41:38.773569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.395 [2024-12-05 19:41:38.773576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.395 [2024-12-05 19:41:38.773584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67088 len:8 PRP1 0x0 PRP2 0x0 00:21:45.395 [2024-12-05 19:41:38.773598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.395 [2024-12-05 19:41:38.773899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:45.395 [2024-12-05 19:41:38.774144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c1e50 (9): Bad file descriptor 00:21:45.395 [2024-12-05 19:41:38.774258] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.395 [2024-12-05 19:41:38.774280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c1e50 with addr=10.0.0.3, port=4420 00:21:45.395 [2024-12-05 
19:41:38.774292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c1e50 is same with the state(6) to be set 00:21:45.395 [2024-12-05 19:41:38.774311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c1e50 (9): Bad file descriptor 00:21:45.395 [2024-12-05 19:41:38.774328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:45.395 [2024-12-05 19:41:38.774338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:45.395 [2024-12-05 19:41:38.774348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:45.395 [2024-12-05 19:41:38.774359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:45.395 [2024-12-05 19:41:38.774370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:45.395 19:41:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:21:47.267 4170.50 IOPS, 16.29 MiB/s [2024-12-05T19:41:40.963Z] 2780.33 IOPS, 10.86 MiB/s [2024-12-05T19:41:40.963Z] [2024-12-05 19:41:40.774513] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.526 [2024-12-05 19:41:40.774791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c1e50 with addr=10.0.0.3, port=4420 00:21:47.526 [2024-12-05 19:41:40.774945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c1e50 is same with the state(6) to be set 00:21:47.526 [2024-12-05 19:41:40.775248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c1e50 (9): Bad file descriptor 00:21:47.526 [2024-12-05 19:41:40.775489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:47.526 [2024-12-05 19:41:40.775664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:47.526 [2024-12-05 19:41:40.775791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:47.526 [2024-12-05 19:41:40.775840] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
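The connect() failures above carry errno = 111, i.e. ECONNREFUSED: nothing is accepting connections at 10.0.0.3:4420 any more, so each reconnect attempt fails immediately and every controller reset ends in "Resetting controller failed." Purely as an illustration (the address and port are copied from the log; this probe is not part of timeout.sh), the same condition can be checked from bash:

# Exit 0 if something is listening on 10.0.0.3:4420, non-zero otherwise.
# Uses bash's /dev/tcp pseudo-device; 'timeout' bounds the attempt to one second.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.3/4420' 2>/dev/null; then
    echo "listener is up"
else
    echo "connection refused or timed out (errno 111 in the log above)"
fi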
00:21:47.526 [2024-12-05 19:41:40.775982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:47.526 19:41:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:21:47.526 19:41:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:47.526 19:41:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:47.785 19:41:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:21:47.785 19:41:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:21:47.785 19:41:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:47.785 19:41:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:48.044 19:41:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:21:48.044 19:41:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:21:49.238 2085.25 IOPS, 8.15 MiB/s [2024-12-05T19:41:42.934Z] 1668.20 IOPS, 6.52 MiB/s [2024-12-05T19:41:42.934Z] [2024-12-05 19:41:42.776345] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.497 [2024-12-05 19:41:42.776573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c1e50 with addr=10.0.0.3, port=4420 00:21:49.497 [2024-12-05 19:41:42.776741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c1e50 is same with the state(6) to be set 00:21:49.497 [2024-12-05 19:41:42.777017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c1e50 (9): Bad file descriptor 00:21:49.497 [2024-12-05 19:41:42.777109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:49.497 [2024-12-05 19:41:42.777286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:49.497 [2024-12-05 19:41:42.777345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:49.497 [2024-12-05 19:41:42.777451] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:49.497 [2024-12-05 19:41:42.777517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:51.366 1390.17 IOPS, 5.43 MiB/s [2024-12-05T19:41:44.803Z] 1191.57 IOPS, 4.65 MiB/s [2024-12-05T19:41:44.803Z] [2024-12-05 19:41:44.777707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:51.366 [2024-12-05 19:41:44.777780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:51.366 [2024-12-05 19:41:44.777814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:51.366 [2024-12-05 19:41:44.777826] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:21:51.366 [2024-12-05 19:41:44.777838] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
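The @57/@58 assertions in the trace above query the bdevperf RPC socket and compare the reported controller and bdev names against NVMe0 and NVMe0n1: while the target stays unreachable, the controller object is expected to survive and keep retrying rather than disappear. A standalone sketch of that check, assuming the same SPDK checkout and RPC socket paths that appear in the log:

#!/usr/bin/env bash
# Ask the running bdevperf app which NVMe controllers and bdevs it currently has.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')

# The controller should still be present at this point; it is merely failing
# its periodic reconnect attempts.
[[ "$ctrlr" == "NVMe0" ]]   || echo "unexpected controller name: '$ctrlr'"
[[ "$bdev"  == "NVMe0n1" ]] || echo "unexpected bdev name: '$bdev'"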
00:21:52.556 1042.62 IOPS, 4.07 MiB/s 00:21:52.556 Latency(us) 00:21:52.556 [2024-12-05T19:41:45.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.556 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:52.556 Verification LBA range: start 0x0 length 0x4000 00:21:52.556 NVMe0n1 : 8.17 1021.44 3.99 15.67 0.00 123241.47 3634.27 7015926.69 00:21:52.556 [2024-12-05T19:41:45.993Z] =================================================================================================================== 00:21:52.556 [2024-12-05T19:41:45.993Z] Total : 1021.44 3.99 15.67 0.00 123241.47 3634.27 7015926.69 00:21:52.556 { 00:21:52.556 "results": [ 00:21:52.556 { 00:21:52.556 "job": "NVMe0n1", 00:21:52.556 "core_mask": "0x4", 00:21:52.556 "workload": "verify", 00:21:52.556 "status": "finished", 00:21:52.556 "verify_range": { 00:21:52.556 "start": 0, 00:21:52.556 "length": 16384 00:21:52.556 }, 00:21:52.556 "queue_depth": 128, 00:21:52.556 "io_size": 4096, 00:21:52.556 "runtime": 8.165922, 00:21:52.556 "iops": 1021.4400774339995, 00:21:52.556 "mibps": 3.9900003024765605, 00:21:52.556 "io_failed": 128, 00:21:52.556 "io_timeout": 0, 00:21:52.556 "avg_latency_us": 123241.46664691548, 00:21:52.556 "min_latency_us": 3634.269090909091, 00:21:52.556 "max_latency_us": 7015926.69090909 00:21:52.556 } 00:21:52.556 ], 00:21:52.556 "core_count": 1 00:21:52.556 } 00:21:53.121 19:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:21:53.121 19:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.121 19:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:53.379 19:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:21:53.379 19:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:21:53.379 19:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:53.379 19:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:53.637 19:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:21:53.637 19:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82010 00:21:53.637 19:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81987 00:21:53.637 19:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81987 ']' 00:21:53.637 19:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81987 00:21:53.637 19:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:53.637 19:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.637 19:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81987 00:21:53.637 killing process with pid 81987 00:21:53.637 Received shutdown signal, test time was about 9.401351 seconds 00:21:53.637 00:21:53.637 Latency(us) 00:21:53.637 [2024-12-05T19:41:47.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.638 [2024-12-05T19:41:47.075Z] =================================================================================================================== 00:21:53.638 [2024-12-05T19:41:47.075Z] Total : 0.00 
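The summary row and the JSON block above describe the same run: 1021.44 IOPS of 4096-byte verify I/O over a runtime of about 8.17 s, 128 failed I/Os, and an average latency of roughly 123 ms. The derived columns are simple arithmetic on those fields; as a sanity check (jq and awk assumed available, and results.json standing in for the JSON printed above):

# MiB/s  = iops * io_size / 2^20  -> 1021.44 * 4096 / 1048576 = 3.99
# Fail/s = io_failed / runtime    -> 128 / 8.165922           = 15.67
read -r iops io_size runtime io_failed < <(
    jq -r '.results[0] | [.iops, .io_size, .runtime, .io_failed] | @tsv' results.json)
awk -v i="$iops" -v s="$io_size" -v r="$runtime" -v f="$io_failed" \
    'BEGIN { printf "%.2f MiB/s, %.2f fail/s\n", i * s / 1048576, f / r }'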
0.00 0.00 0.00 0.00 0.00 0.00 00:21:53.638 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:53.638 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:53.638 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81987' 00:21:53.638 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81987 00:21:53.638 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81987 00:21:53.896 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:54.154 [2024-12-05 19:41:47.432484] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:54.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:54.154 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82133 00:21:54.154 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:54.154 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82133 /var/tmp/bdevperf.sock 00:21:54.154 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82133 ']' 00:21:54.154 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:54.154 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:54.154 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:54.154 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.154 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:54.154 [2024-12-05 19:41:47.510764] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:21:54.154 [2024-12-05 19:41:47.511135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82133 ] 00:21:54.448 [2024-12-05 19:41:47.661504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.448 [2024-12-05 19:41:47.722501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.448 [2024-12-05 19:41:47.776062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:54.448 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.448 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:54.448 19:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:54.719 19:41:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:21:55.285 NVMe0n1 00:21:55.286 19:41:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82149 00:21:55.286 19:41:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:55.286 19:41:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:21:55.286 Running I/O for 10 seconds... 
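The @78/@79 RPCs above determine how this new bdevperf instance behaves once the listener is removed later in the test: NVMe0 is attached to 10.0.0.3:4420 with a 1 s delay between reconnect attempts, queued I/O failed back after 2 s of disconnection, and the controller treated as lost after 5 s (as the --reconnect-delay-sec, --fast-io-fail-timeout-sec and --ctrlr-loss-timeout-sec names suggest). Replayed as a standalone sketch, with every path, name and address taken from the log:

#!/usr/bin/env bash
# Configure bdev_nvme and attach the TCP controller exactly as in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

"$rpc" -s "$sock" bdev_nvme_set_options -r -1

# Reconnect every 1 s; start failing queued I/O after 2 s of disconnection;
# give up on the controller after 5 s without a successful reconnect.
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1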
00:21:56.221 19:41:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:56.481 6933.00 IOPS, 27.08 MiB/s [2024-12-05T19:41:49.918Z] [2024-12-05 19:41:49.717843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.481 [2024-12-05 19:41:49.717906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.481 [2024-12-05 19:41:49.717948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.481 [2024-12-05 19:41:49.717959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.481 [2024-12-05 19:41:49.717971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.481 [2024-12-05 19:41:49.717981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.481 [2024-12-05 19:41:49.717992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.481 [2024-12-05 19:41:49.718002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.481 [2024-12-05 19:41:49.718013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.481 [2024-12-05 19:41:49.718022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.481 [2024-12-05 19:41:49.718033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.481 [2024-12-05 19:41:49.718042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.482 [2024-12-05 19:41:49.718053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.482 [2024-12-05 19:41:49.718062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.482 [2024-12-05 19:41:49.718073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.482 [2024-12-05 19:41:49.718082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.482 [2024-12-05 19:41:49.718093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.482 [2024-12-05 19:41:49.718102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.482 [2024-12-05 19:41:49.718113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63840 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.482 [2024-12-05 19:41:49.718122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.482 (print_command/print_completion notices repeated for each remaining outstanding WRITE on sqid:1, lba 63848-64240, len:8, every one completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0) 00:21:56.483
[2024-12-05 19:41:49.719235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:46 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.483 [2024-12-05 19:41:49.719683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.483 [2024-12-05 19:41:49.719692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.719703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.719712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.719723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.719732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.719744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.719753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.719764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.719773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.719784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.719793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.719804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.719813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.719824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.719833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.719844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:56.484 [2024-12-05 19:41:49.719859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.719870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.719879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.719890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.719899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.719910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.719920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.719931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.719953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.719967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.719976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.719987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.719996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.720030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.720050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.720070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.720090] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.720119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.720142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.720162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.720182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.720203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.720223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.720244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.720264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.720284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.484 [2024-12-05 19:41:49.720304] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.484 [2024-12-05 19:41:49.720324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.484 [2024-12-05 19:41:49.720345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.484 [2024-12-05 19:41:49.720372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.484 [2024-12-05 19:41:49.720393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.484 [2024-12-05 19:41:49.720413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.484 [2024-12-05 19:41:49.720434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.484 [2024-12-05 19:41:49.720455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.484 [2024-12-05 19:41:49.720475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.484 [2024-12-05 19:41:49.720495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.484 [2024-12-05 19:41:49.720506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.484 [2024-12-05 19:41:49.720515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.485 [2024-12-05 19:41:49.720526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.485 [2024-12-05 19:41:49.720536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.485 [2024-12-05 19:41:49.720547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.485 [2024-12-05 19:41:49.720556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.485 [2024-12-05 19:41:49.720567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.485 [2024-12-05 19:41:49.720577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.485 [2024-12-05 19:41:49.720588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.485 [2024-12-05 19:41:49.720597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.485 [2024-12-05 19:41:49.720608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.485 [2024-12-05 19:41:49.720618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.485 [2024-12-05 19:41:49.720629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.485 [2024-12-05 19:41:49.720646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.485 [2024-12-05 19:41:49.720657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe00970 is same with the state(6) to be set 00:21:56.485 [2024-12-05 19:41:49.720669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.485 [2024-12-05 19:41:49.720677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.485 [2024-12-05 19:41:49.720685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64664 len:8 PRP1 0x0 PRP2 0x0 00:21:56.485 [2024-12-05 19:41:49.720699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.485 [2024-12-05 19:41:49.720996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:56.485 [2024-12-05 19:41:49.721152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda0e50 (9): Bad file descriptor 00:21:56.485 [2024-12-05 19:41:49.721260] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.485 [2024-12-05 19:41:49.721283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda0e50 with addr=10.0.0.3, port=4420 00:21:56.485 [2024-12-05 19:41:49.721294] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0e50 is same with the state(6) to be set
00:21:56.485 [2024-12-05 19:41:49.721312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda0e50 (9): Bad file descriptor
00:21:56.485 [2024-12-05 19:41:49.721329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:21:56.485 [2024-12-05 19:41:49.721338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:21:56.485 [2024-12-05 19:41:49.721349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:21:56.485 [2024-12-05 19:41:49.721359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:21:56.485 [2024-12-05 19:41:49.721370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:21:56.485 19:41:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:21:57.420 3978.00 IOPS, 15.54 MiB/s [2024-12-05T19:41:50.857Z] [2024-12-05 19:41:50.721542] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.420 [2024-12-05 19:41:50.721877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda0e50 with addr=10.0.0.3, port=4420
00:21:57.420 [2024-12-05 19:41:50.722030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0e50 is same with the state(6) to be set
00:21:57.420 [2024-12-05 19:41:50.722215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda0e50 (9): Bad file descriptor
00:21:57.420 [2024-12-05 19:41:50.722386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:21:57.420 [2024-12-05 19:41:50.722456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:21:57.420 [2024-12-05 19:41:50.722702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:21:57.420 [2024-12-05 19:41:50.722753] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:21:57.420 [2024-12-05 19:41:50.722888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:21:57.420 19:41:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:21:57.679 [2024-12-05 19:41:50.990781] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:21:57.679 19:41:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82149
00:21:58.502 2652.00 IOPS, 10.36 MiB/s [2024-12-05T19:41:51.939Z] [2024-12-05 19:41:51.737072] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:22:00.373 1989.00 IOPS, 7.77 MiB/s [2024-12-05T19:41:54.746Z] 3154.00 IOPS, 12.32 MiB/s [2024-12-05T19:41:55.692Z] 4191.83 IOPS, 16.37 MiB/s [2024-12-05T19:41:56.627Z] 4949.86 IOPS, 19.34 MiB/s [2024-12-05T19:41:58.004Z] 5520.75 IOPS, 21.57 MiB/s [2024-12-05T19:41:58.940Z] 5969.00 IOPS, 23.32 MiB/s [2024-12-05T19:41:58.940Z] 6332.10 IOPS, 24.73 MiB/s
00:22:05.503 Latency(us)
00:22:05.503 [2024-12-05T19:41:58.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:05.503 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:05.503 Verification LBA range: start 0x0 length 0x4000
00:22:05.503 NVMe0n1 : 10.01 6338.70 24.76 0.00 0.00 20151.67 1251.14 3019898.88
00:22:05.503 [2024-12-05T19:41:58.940Z] ===================================================================================================================
00:22:05.503 [2024-12-05T19:41:58.940Z] Total : 6338.70 24.76 0.00 0.00 20151.67 1251.14 3019898.88
00:22:05.503 {
00:22:05.503 "results": [
00:22:05.503 {
00:22:05.503 "job": "NVMe0n1",
00:22:05.503 "core_mask": "0x4",
00:22:05.503 "workload": "verify",
00:22:05.503 "status": "finished",
00:22:05.503 "verify_range": {
00:22:05.503 "start": 0,
00:22:05.503 "length": 16384
00:22:05.503 },
00:22:05.503 "queue_depth": 128,
00:22:05.503 "io_size": 4096,
00:22:05.503 "runtime": 10.009787,
00:22:05.503 "iops": 6338.696317913658,
00:22:05.503 "mibps": 24.760532491850228,
00:22:05.503 "io_failed": 0,
00:22:05.503 "io_timeout": 0,
00:22:05.503 "avg_latency_us": 20151.66869081682,
00:22:05.503 "min_latency_us": 1251.1418181818183,
00:22:05.503 "max_latency_us": 3019898.88
00:22:05.503 }
00:22:05.503 ],
00:22:05.503 "core_count": 1
00:22:05.503 }
00:22:05.503 19:41:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82255
00:22:05.503 19:41:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:05.503 19:41:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:22:05.503 Running I/O for 10 seconds...
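(Note on the summary above: the "mibps" field in the results JSON is simply "iops" scaled by the 4096-byte I/O size. A minimal Python sketch of that conversion, with variable names mirroring the JSON keys and chosen only for illustration:)

    # Sketch: reproduce the "mibps" field from "iops" and "io_size" in the results JSON above.
    iops = 6338.696317913658      # "iops" reported by bdevperf
    io_size = 4096                # "io_size" in bytes (4 KiB per I/O)
    mibps = iops * io_size / (1024 * 1024)
    print(f"{mibps:.2f} MiB/s")   # prints ~24.76, matching "mibps" and the Total row in the table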
00:22:06.444 19:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:06.444 7060.00 IOPS, 27.58 MiB/s [2024-12-05T19:41:59.881Z] [2024-12-05 19:41:59.861150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.861389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.861531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.861661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.861776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.861918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.862058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.862231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.862392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.862525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.862585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.862716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.862778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.862832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.862881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with [2024-12-05 19:41:59.863238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.444 [2024-12-05 19:41:59.863276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-12-05 
19:41:59.863291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.444 [2024-12-05 19:41:59.863301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-12-05 19:41:59.863311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.444 [2024-12-05 19:41:59.863320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-12-05 19:41:59.863330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.444 [2024-12-05 19:41:59.863339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.444 [2024-12-05 19:41:59.863349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0e50 is same with the state(6) to be set 00:22:06.444 the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 
19:41:59.863968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.863993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same 
with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.444 [2024-12-05 19:41:59.864312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864344] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the 
state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b748c0 is same with the state(6) to be set 00:22:06.445 [2024-12-05 19:41:59.864678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.864698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.864717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.864728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.864739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.864748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.864760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.864769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.864780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.864789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.864800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.864809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.864821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.864830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.864841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.864850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.864861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.864870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.864882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.864891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.864909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.864918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.864930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.864939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.864950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.864960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.864971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.864980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.864992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 
[2024-12-05 19:41:59.865001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.865012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.865021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.865032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.865044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.865056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.865066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.865077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.865087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.865098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.865107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.865141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.865151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.445 [2024-12-05 19:41:59.865163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.445 [2024-12-05 19:41:59.865172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865236] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.446 [2024-12-05 19:41:59.865988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.446 [2024-12-05 19:41:59.865997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 
[2024-12-05 19:41:59.866094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:26 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62488 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.447 [2024-12-05 19:41:59.866826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.447 [2024-12-05 19:41:59.866838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-12-05 19:41:59.866847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.866859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-12-05 19:41:59.866868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.866879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-12-05 19:41:59.866888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.866899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-12-05 19:41:59.866909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.866920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-12-05 19:41:59.866928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.866940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:06.448 [2024-12-05 19:41:59.866949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.866960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-12-05 19:41:59.866969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.866981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-12-05 19:41:59.866990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-12-05 19:41:59.867010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-12-05 19:41:59.867031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-12-05 19:41:59.867051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.448 [2024-12-05 19:41:59.867071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.448 [2024-12-05 19:41:59.867096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.448 [2024-12-05 19:41:59.867127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.448 [2024-12-05 19:41:59.867155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.448 [2024-12-05 19:41:59.867176] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.448 [2024-12-05 19:41:59.867196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.448 [2024-12-05 19:41:59.867216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.448 [2024-12-05 19:41:59.867236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.448 [2024-12-05 19:41:59.867256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.448 [2024-12-05 19:41:59.867276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.448 [2024-12-05 19:41:59.867296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.448 [2024-12-05 19:41:59.867316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.448 [2024-12-05 19:41:59.867336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.448 [2024-12-05 19:41:59.867355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.448 [2024-12-05 19:41:59.867375] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.448 [2024-12-05 19:41:59.867396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfefd0 is same with the state(6) to be set 00:22:06.448 [2024-12-05 19:41:59.867418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.448 [2024-12-05 19:41:59.867426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.448 [2024-12-05 19:41:59.867438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62624 len:8 PRP1 0x0 PRP2 0x0 00:22:06.448 [2024-12-05 19:41:59.867447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.448 [2024-12-05 19:41:59.867728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:06.448 [2024-12-05 19:41:59.867766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda0e50 (9): Bad file descriptor 00:22:06.448 [2024-12-05 19:41:59.867867] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.448 [2024-12-05 19:41:59.867890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda0e50 with addr=10.0.0.3, port=4420 00:22:06.448 [2024-12-05 19:41:59.867902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0e50 is same with the state(6) to be set 00:22:06.448 [2024-12-05 19:41:59.867920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda0e50 (9): Bad file descriptor 00:22:06.448 [2024-12-05 19:41:59.867936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:06.448 [2024-12-05 19:41:59.867959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:06.448 [2024-12-05 19:41:59.867971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:06.448 [2024-12-05 19:41:59.867982] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:22:06.448 [2024-12-05 19:41:59.867992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:06.707 19:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:07.645 3858.00 IOPS, 15.07 MiB/s [2024-12-05T19:42:01.082Z] [2024-12-05 19:42:00.868167] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.645 [2024-12-05 19:42:00.868451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda0e50 with addr=10.0.0.3, port=4420 00:22:07.645 [2024-12-05 19:42:00.868604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0e50 is same with the state(6) to be set 00:22:07.645 [2024-12-05 19:42:00.868905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda0e50 (9): Bad file descriptor 00:22:07.645 [2024-12-05 19:42:00.869104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:07.645 [2024-12-05 19:42:00.869325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:07.645 [2024-12-05 19:42:00.869344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:07.645 [2024-12-05 19:42:00.869357] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:07.645 [2024-12-05 19:42:00.869370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:08.581 2572.00 IOPS, 10.05 MiB/s [2024-12-05T19:42:02.018Z] [2024-12-05 19:42:01.869549] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.581 [2024-12-05 19:42:01.869638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda0e50 with addr=10.0.0.3, port=4420 00:22:08.582 [2024-12-05 19:42:01.869656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0e50 is same with the state(6) to be set 00:22:08.582 [2024-12-05 19:42:01.869684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda0e50 (9): Bad file descriptor 00:22:08.582 [2024-12-05 19:42:01.869705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:08.582 [2024-12-05 19:42:01.869732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:08.582 [2024-12-05 19:42:01.869744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:08.582 [2024-12-05 19:42:01.869756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:22:08.582 [2024-12-05 19:42:01.869767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:09.517 1929.00 IOPS, 7.54 MiB/s [2024-12-05T19:42:02.954Z] [2024-12-05 19:42:02.872628] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.517 [2024-12-05 19:42:02.872961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda0e50 with addr=10.0.0.3, port=4420 00:22:09.517 [2024-12-05 19:42:02.872988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0e50 is same with the state(6) to be set 00:22:09.517 [2024-12-05 19:42:02.873271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda0e50 (9): Bad file descriptor 00:22:09.517 [2024-12-05 19:42:02.873525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:09.517 [2024-12-05 19:42:02.873539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:09.517 [2024-12-05 19:42:02.873551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:09.517 [2024-12-05 19:42:02.873562] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:09.517 [2024-12-05 19:42:02.873574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:09.517 19:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:09.775 [2024-12-05 19:42:03.159542] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:09.775 19:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82255 00:22:10.603 1543.20 IOPS, 6.03 MiB/s [2024-12-05T19:42:04.040Z] [2024-12-05 19:42:03.902033] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
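A note on the interleaved bdevperf progress samples above: 3858.00, 2572.00, 1929.00 and 1543.20 IOPS are consistent with a running average rather than fresh per-second throughput. Assuming roughly 7716 I/Os had completed before the connection dropped (an inference, not a figure printed in the log), dividing by 2, 3, 4 and 5 seconds of elapsed runtime reproduces those four values exactly, and the MiB/s column is simply IOPS x 4 KiB (3858.00 x 4096 B = 15.07 MiB/s). In other words, I/O stalls completely while the reset loop keeps failing with connect() errno 111 (ECONNREFUSED), and only resumes once host/timeout.sh@102 re-adds the TCP listener and the controller reset succeeds. The listener toggle itself is plain rpc.py usage; a minimal sketch built from the same calls seen at timeout.sh@102 here and timeout.sh@126 further down (the remove/add ordering and the sleep are illustrative only, not the exact test flow):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # drop the TCP listener so host-side I/O starts failing and the reset loop kicks in
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4420

  sleep 3   # let the host retry against the now-closed port (connect() errno 111)

  # restore the listener; the next reconnect/reset attempt should succeed
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4420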
00:22:12.475 2652.50 IOPS, 10.36 MiB/s [2024-12-05T19:42:06.847Z] 3674.71 IOPS, 14.35 MiB/s [2024-12-05T19:42:07.784Z] 4401.38 IOPS, 17.19 MiB/s [2024-12-05T19:42:09.182Z] 4964.78 IOPS, 19.39 MiB/s [2024-12-05T19:42:09.182Z] 5404.30 IOPS, 21.11 MiB/s 00:22:15.745 Latency(us) 00:22:15.745 [2024-12-05T19:42:09.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.745 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:15.745 Verification LBA range: start 0x0 length 0x4000 00:22:15.745 NVMe0n1 : 10.01 5404.58 21.11 3715.34 0.00 13993.77 700.04 3019898.88 00:22:15.745 [2024-12-05T19:42:09.182Z] =================================================================================================================== 00:22:15.745 [2024-12-05T19:42:09.182Z] Total : 5404.58 21.11 3715.34 0.00 13993.77 0.00 3019898.88 00:22:15.745 { 00:22:15.745 "results": [ 00:22:15.745 { 00:22:15.745 "job": "NVMe0n1", 00:22:15.745 "core_mask": "0x4", 00:22:15.745 "workload": "verify", 00:22:15.745 "status": "finished", 00:22:15.745 "verify_range": { 00:22:15.745 "start": 0, 00:22:15.745 "length": 16384 00:22:15.745 }, 00:22:15.745 "queue_depth": 128, 00:22:15.745 "io_size": 4096, 00:22:15.745 "runtime": 10.009841, 00:22:15.745 "iops": 5404.581351491996, 00:22:15.745 "mibps": 21.11164590426561, 00:22:15.745 "io_failed": 37190, 00:22:15.745 "io_timeout": 0, 00:22:15.745 "avg_latency_us": 13993.767495237402, 00:22:15.745 "min_latency_us": 700.0436363636363, 00:22:15.745 "max_latency_us": 3019898.88 00:22:15.745 } 00:22:15.745 ], 00:22:15.745 "core_count": 1 00:22:15.745 } 00:22:15.745 19:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82133 00:22:15.745 19:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82133 ']' 00:22:15.745 19:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82133 00:22:15.745 19:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:15.745 19:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:15.745 19:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82133 00:22:15.745 killing process with pid 82133 00:22:15.745 Received shutdown signal, test time was about 10.000000 seconds 00:22:15.745 00:22:15.745 Latency(us) 00:22:15.745 [2024-12-05T19:42:09.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.745 [2024-12-05T19:42:09.182Z] =================================================================================================================== 00:22:15.745 [2024-12-05T19:42:09.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:15.745 19:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:15.745 19:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:15.745 19:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82133' 00:22:15.745 19:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82133 00:22:15.745 19:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82133 00:22:15.745 19:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 
10 -f 00:22:15.745 19:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82368 00:22:15.745 19:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82368 /var/tmp/bdevperf.sock 00:22:15.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:15.746 19:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82368 ']' 00:22:15.746 19:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.746 19:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.746 19:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:15.746 19:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.746 19:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 [2024-12-05 19:42:09.070431] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:22:15.746 [2024-12-05 19:42:09.070971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82368 ] 00:22:16.004 [2024-12-05 19:42:09.219334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.004 [2024-12-05 19:42:09.276995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.004 [2024-12-05 19:42:09.330713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:16.004 19:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.004 19:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:16.004 19:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82371 00:22:16.004 19:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82368 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:16.004 19:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:16.570 19:42:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:16.828 NVMe0n1 00:22:16.828 19:42:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82420 00:22:16.828 19:42:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:16.828 19:42:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:16.828 Running I/O for 10 seconds... 
00:22:17.762 19:42:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:18.021 14859.00 IOPS, 58.04 MiB/s [2024-12-05T19:42:11.458Z] [2024-12-05 19:42:11.351891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.021 [2024-12-05 19:42:11.351975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.021 [2024-12-05 19:42:11.352019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.021 [2024-12-05 19:42:11.352030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.021 [2024-12-05 19:42:11.352042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.021 [2024-12-05 19:42:11.352052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.021 [2024-12-05 19:42:11.352064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.021 [2024-12-05 19:42:11.352074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.021 [2024-12-05 19:42:11.352085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.021 [2024-12-05 19:42:11.352095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.021 [2024-12-05 19:42:11.352111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.021 [2024-12-05 19:42:11.352121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.021 [2024-12-05 19:42:11.352133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.021 [2024-12-05 19:42:11.352160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.021 [2024-12-05 19:42:11.352175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.021 [2024-12-05 19:42:11.352185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.021 [2024-12-05 19:42:11.352196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.021 [2024-12-05 19:42:11.352206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.021 [2024-12-05 19:42:11.352218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 
[ roughly 110 further notices of the same form follow (timestamps 00:22:18.021 to 00:22:18.023), covering READ commands sqid:1, cid:117 down through cid:6, each len:8 at various LBAs: every command is printed by nvme_io_qpair_print_command and completed by spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, the expected result of removing the listener on 10.0.0.3:4420 while I/O is in flight ]
00:22:18.024 [2024-12-05 19:42:11.354522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.024 [2024-12-05 19:42:11.354533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.024 [2024-12-05 19:42:11.354543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.024 [2024-12-05 19:42:11.354554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.024 [2024-12-05 19:42:11.354563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.024 [2024-12-05 19:42:11.354574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:119280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.024 [2024-12-05 19:42:11.354584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.024 [2024-12-05 19:42:11.354595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.024 [2024-12-05 19:42:11.354604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.024 [2024-12-05 19:42:11.354615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.024 [2024-12-05 19:42:11.354625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.024 [2024-12-05 19:42:11.354636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.024 [2024-12-05 19:42:11.354645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.024 [2024-12-05 19:42:11.354656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ee20 is same with the state(6) to be set 00:22:18.024 [2024-12-05 19:42:11.354668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:18.024 [2024-12-05 19:42:11.354676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:18.024 [2024-12-05 19:42:11.354684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119792 len:8 PRP1 0x0 PRP2 0x0 00:22:18.024 [2024-12-05 19:42:11.354693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.024 [2024-12-05 19:42:11.355011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:18.024 [2024-12-05 19:42:11.355099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d1e50 (9): Bad file descriptor 00:22:18.024 [2024-12-05 19:42:11.355225] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.024 [2024-12-05 19:42:11.355248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x11d1e50 with addr=10.0.0.3, port=4420 00:22:18.024 [2024-12-05 19:42:11.355260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d1e50 is same with the state(6) to be set 00:22:18.024 [2024-12-05 19:42:11.355278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d1e50 (9): Bad file descriptor 00:22:18.024 [2024-12-05 19:42:11.355296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:18.024 [2024-12-05 19:42:11.355305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:18.024 [2024-12-05 19:42:11.355316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:18.024 [2024-12-05 19:42:11.355327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:22:18.024 [2024-12-05 19:42:11.355337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:18.024 19:42:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82420 00:22:19.889 8572.50 IOPS, 33.49 MiB/s [2024-12-05T19:42:13.585Z] 5715.00 IOPS, 22.32 MiB/s [2024-12-05T19:42:13.585Z] [2024-12-05 19:42:13.355590] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:20.148 [2024-12-05 19:42:13.355827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d1e50 with addr=10.0.0.3, port=4420 00:22:20.148 [2024-12-05 19:42:13.356065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d1e50 is same with the state(6) to be set 00:22:20.148 [2024-12-05 19:42:13.356255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d1e50 (9): Bad file descriptor 00:22:20.148 [2024-12-05 19:42:13.356499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:20.148 [2024-12-05 19:42:13.356641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:20.148 [2024-12-05 19:42:13.356712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:20.148 [2024-12-05 19:42:13.356825] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:22:20.148 [2024-12-05 19:42:13.356977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:22.020 4286.25 IOPS, 16.74 MiB/s [2024-12-05T19:42:15.457Z] 3429.00 IOPS, 13.39 MiB/s [2024-12-05T19:42:15.457Z] [2024-12-05 19:42:15.357359] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.020 [2024-12-05 19:42:15.357440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d1e50 with addr=10.0.0.3, port=4420 00:22:22.020 [2024-12-05 19:42:15.357457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d1e50 is same with the state(6) to be set 00:22:22.020 [2024-12-05 19:42:15.357486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d1e50 (9): Bad file descriptor 00:22:22.020 [2024-12-05 19:42:15.357506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:22.020 [2024-12-05 19:42:15.357516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:22.020 [2024-12-05 19:42:15.357528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:22.020 [2024-12-05 19:42:15.357539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:22:22.020 [2024-12-05 19:42:15.357551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:23.892 2857.50 IOPS, 11.16 MiB/s [2024-12-05T19:42:17.588Z] 2449.29 IOPS, 9.57 MiB/s [2024-12-05T19:42:17.588Z] [2024-12-05 19:42:17.357649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:24.151 [2024-12-05 19:42:17.357720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:24.151 [2024-12-05 19:42:17.357733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:24.151 [2024-12-05 19:42:17.357745] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:22:24.151 [2024-12-05 19:42:17.357758] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
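The reconnect attempts above stop once the controller-loss timeout expires and the controller is left in the failed state; the statistics and trace dump that follow are the test's evidence. The pass/fail decision visible below counts 'reconnect delay' lines in the bpftrace output. A sketch of that check, assuming, as the `(( 3 <= 2 ))` evaluation below suggests, that two or fewer delayed reconnects would fail the test; the trace path is copied from the log:

    # sketch of the check performed below in host/timeout.sh
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    # this run recorded three delayed reconnects (~2 s apart, per the probe timestamps);
    # two or fewer would mean the reconnect-delay logic never engaged
    (( delays <= 2 )) && exit 1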
00:22:25.087 2143.12 IOPS, 8.37 MiB/s 00:22:25.087 Latency(us) 00:22:25.087 [2024-12-05T19:42:18.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.087 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:22:25.087 NVMe0n1 : 8.16 2100.72 8.21 15.68 0.00 60431.22 8102.63 7015926.69 00:22:25.087 [2024-12-05T19:42:18.524Z] =================================================================================================================== 00:22:25.087 [2024-12-05T19:42:18.524Z] Total : 2100.72 8.21 15.68 0.00 60431.22 8102.63 7015926.69 00:22:25.087 { 00:22:25.087 "results": [ 00:22:25.087 { 00:22:25.087 "job": "NVMe0n1", 00:22:25.087 "core_mask": "0x4", 00:22:25.087 "workload": "randread", 00:22:25.087 "status": "finished", 00:22:25.087 "queue_depth": 128, 00:22:25.087 "io_size": 4096, 00:22:25.087 "runtime": 8.161493, 00:22:25.087 "iops": 2100.7185817594896, 00:22:25.087 "mibps": 8.205931959998006, 00:22:25.087 "io_failed": 128, 00:22:25.087 "io_timeout": 0, 00:22:25.087 "avg_latency_us": 60431.215968589975, 00:22:25.087 "min_latency_us": 8102.632727272728, 00:22:25.087 "max_latency_us": 7015926.69090909 00:22:25.087 } 00:22:25.087 ], 00:22:25.087 "core_count": 1 00:22:25.087 } 00:22:25.087 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:25.087 Attaching 5 probes... 00:22:25.087 1412.227687: reset bdev controller NVMe0 00:22:25.087 1412.390162: reconnect bdev controller NVMe0 00:22:25.087 3412.672349: reconnect delay bdev controller NVMe0 00:22:25.087 3412.697122: reconnect bdev controller NVMe0 00:22:25.087 5414.420027: reconnect delay bdev controller NVMe0 00:22:25.087 5414.446936: reconnect bdev controller NVMe0 00:22:25.087 7414.826461: reconnect delay bdev controller NVMe0 00:22:25.087 7414.870078: reconnect bdev controller NVMe0 00:22:25.087 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:22:25.087 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:22:25.087 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82371 00:22:25.087 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:25.087 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82368 00:22:25.087 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82368 ']' 00:22:25.087 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82368 00:22:25.087 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:25.087 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.087 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82368 00:22:25.087 killing process with pid 82368 00:22:25.087 Received shutdown signal, test time was about 8.230986 seconds 00:22:25.087 00:22:25.087 Latency(us) 00:22:25.087 [2024-12-05T19:42:18.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.087 [2024-12-05T19:42:18.524Z] =================================================================================================================== 00:22:25.087 [2024-12-05T19:42:18.524Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:25.087 19:42:18 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:25.087 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:25.087 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82368' 00:22:25.087 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82368 00:22:25.087 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82368 00:22:25.346 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:25.604 rmmod nvme_tcp 00:22:25.604 rmmod nvme_fabrics 00:22:25.604 rmmod nvme_keyring 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81945 ']' 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81945 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81945 ']' 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81945 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81945 00:22:25.604 killing process with pid 81945 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81945' 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81945 00:22:25.604 19:42:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81945 00:22:25.863 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:25.863 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:25.863 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:25.863 19:42:19 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:22:25.863 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:22:25.863 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:22:25.863 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:25.863 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:25.863 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:25.863 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:25.863 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:25.863 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:25.863 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:25.863 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:25.863 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:25.863 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:26.126 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:26.126 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:26.126 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:26.126 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:26.126 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:26.126 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:26.126 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:26.126 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.126 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.126 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.126 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:22:26.126 ************************************ 00:22:26.126 END TEST nvmf_timeout 00:22:26.126 ************************************ 00:22:26.126 00:22:26.126 real 0m46.007s 00:22:26.126 user 2m15.014s 00:22:26.126 sys 0m5.561s 00:22:26.126 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.126 19:42:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:26.126 19:42:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:22:26.126 19:42:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:22:26.126 00:22:26.127 real 5m5.421s 00:22:26.127 user 13m18.099s 00:22:26.127 sys 1m9.035s 00:22:26.127 19:42:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.127 19:42:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
00:22:26.127 ************************************ 00:22:26.127 END TEST nvmf_host 00:22:26.127 ************************************ 00:22:26.127 19:42:19 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:22:26.127 19:42:19 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:22:26.127 ************************************ 00:22:26.127 END TEST nvmf_tcp 00:22:26.127 ************************************ 00:22:26.127 00:22:26.127 real 12m46.033s 00:22:26.127 user 30m46.156s 00:22:26.127 sys 3m10.935s 00:22:26.127 19:42:19 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.127 19:42:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:26.386 19:42:19 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:22:26.386 19:42:19 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:26.386 19:42:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:26.386 19:42:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.386 19:42:19 -- common/autotest_common.sh@10 -- # set +x 00:22:26.386 ************************************ 00:22:26.386 START TEST nvmf_dif 00:22:26.386 ************************************ 00:22:26.386 19:42:19 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:26.386 * Looking for test storage... 00:22:26.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:26.386 19:42:19 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:26.386 19:42:19 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:22:26.386 19:42:19 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:26.386 19:42:19 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:22:26.386 19:42:19 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.386 19:42:19 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:26.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.386 --rc genhtml_branch_coverage=1 00:22:26.386 --rc genhtml_function_coverage=1 00:22:26.386 --rc genhtml_legend=1 00:22:26.386 --rc geninfo_all_blocks=1 00:22:26.386 --rc geninfo_unexecuted_blocks=1 00:22:26.386 00:22:26.386 ' 00:22:26.386 19:42:19 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:26.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.386 --rc genhtml_branch_coverage=1 00:22:26.386 --rc genhtml_function_coverage=1 00:22:26.386 --rc genhtml_legend=1 00:22:26.386 --rc geninfo_all_blocks=1 00:22:26.386 --rc geninfo_unexecuted_blocks=1 00:22:26.386 00:22:26.386 ' 00:22:26.386 19:42:19 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:26.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.386 --rc genhtml_branch_coverage=1 00:22:26.386 --rc genhtml_function_coverage=1 00:22:26.386 --rc genhtml_legend=1 00:22:26.386 --rc geninfo_all_blocks=1 00:22:26.386 --rc geninfo_unexecuted_blocks=1 00:22:26.386 00:22:26.386 ' 00:22:26.386 19:42:19 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:26.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.386 --rc genhtml_branch_coverage=1 00:22:26.386 --rc genhtml_function_coverage=1 00:22:26.386 --rc genhtml_legend=1 00:22:26.386 --rc geninfo_all_blocks=1 00:22:26.386 --rc geninfo_unexecuted_blocks=1 00:22:26.386 00:22:26.386 ' 00:22:26.386 19:42:19 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.386 19:42:19 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.386 19:42:19 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.386 19:42:19 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.386 19:42:19 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.386 19:42:19 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.386 19:42:19 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.386 19:42:19 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:22:26.387 19:42:19 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.387 19:42:19 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.387 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.387 19:42:19 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:22:26.387 19:42:19 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:26.387 19:42:19 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:26.387 19:42:19 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:22:26.387 19:42:19 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.387 19:42:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:26.387 19:42:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:26.387 19:42:19 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:26.646 Cannot find device 
"nvmf_init_br" 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@162 -- # true 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:26.646 Cannot find device "nvmf_init_br2" 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@163 -- # true 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:26.646 Cannot find device "nvmf_tgt_br" 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@164 -- # true 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:26.646 Cannot find device "nvmf_tgt_br2" 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@165 -- # true 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:26.646 Cannot find device "nvmf_init_br" 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@166 -- # true 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:26.646 Cannot find device "nvmf_init_br2" 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@167 -- # true 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:26.646 Cannot find device "nvmf_tgt_br" 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@168 -- # true 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:26.646 Cannot find device "nvmf_tgt_br2" 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@169 -- # true 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:26.646 Cannot find device "nvmf_br" 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@170 -- # true 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:26.646 Cannot find device "nvmf_init_if" 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@171 -- # true 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:26.646 Cannot find device "nvmf_init_if2" 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@172 -- # true 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:26.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@173 -- # true 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:26.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@174 -- # true 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:26.646 19:42:19 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:26.646 19:42:20 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:26.646 19:42:20 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:26.646 19:42:20 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:26.646 19:42:20 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:22:26.646 19:42:20 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:26.646 19:42:20 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:26.646 19:42:20 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:26.905 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:26.905 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:22:26.905 00:22:26.905 --- 10.0.0.3 ping statistics --- 00:22:26.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.905 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:26.905 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:22:26.905 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:22:26.905 00:22:26.905 --- 10.0.0.4 ping statistics --- 00:22:26.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.905 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:26.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:26.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:22:26.905 00:22:26.905 --- 10.0.0.1 ping statistics --- 00:22:26.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.905 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:26.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:22:26.905 00:22:26.905 --- 10.0.0.2 ping statistics --- 00:22:26.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.905 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:22:26.905 19:42:20 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:27.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:27.163 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:27.163 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:27.163 19:42:20 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.163 19:42:20 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:27.163 19:42:20 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:27.163 19:42:20 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.163 19:42:20 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:27.163 19:42:20 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:27.422 19:42:20 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:27.422 19:42:20 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:22:27.422 19:42:20 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:27.422 19:42:20 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:27.422 19:42:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:27.422 19:42:20 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82906 00:22:27.422 19:42:20 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:27.422 19:42:20 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82906 00:22:27.422 19:42:20 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 82906 ']' 00:22:27.422 19:42:20 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.422 19:42:20 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:27.422 19:42:20 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
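
The trace above is the test-harness network bring-up from nvmf/common.sh before the DIF tests: a network namespace (nvmf_tgt_ns_spdk) that will host the SPDK target, veth pairs whose bridge-side ends are enslaved to nvmf_br, initiator addresses 10.0.0.1/10.0.0.2 versus target addresses 10.0.0.3/10.0.0.4, iptables ACCEPT rules for TCP port 4420, and ping checks in both directions; nvmf_tgt is then launched inside the namespace (the waitforlisten just above). The earlier "Cannot find device" lines are only the cleanup pass for leftovers from a previous run. As a condensed manual sketch of the same topology, using only commands and names visible in the trace (run as root; the second veth pair and the teardown are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3   # initiator side reaches the target address inside the namespace

With the namespace in place, every NVMe/TCP listener the target opens on 10.0.0.3:4420 is reachable from the host only through this bridge, which is what the fio jobs below rely on.
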
00:22:27.422 19:42:20 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:27.422 19:42:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:27.422 [2024-12-05 19:42:20.669722] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:22:27.422 [2024-12-05 19:42:20.669834] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.422 [2024-12-05 19:42:20.822306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.680 [2024-12-05 19:42:20.885244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.680 [2024-12-05 19:42:20.885300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.680 [2024-12-05 19:42:20.885314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.680 [2024-12-05 19:42:20.885324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.680 [2024-12-05 19:42:20.885333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.680 [2024-12-05 19:42:20.885784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.680 [2024-12-05 19:42:20.945554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:27.680 19:42:21 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:27.680 19:42:21 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:22:27.680 19:42:21 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:27.680 19:42:21 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:27.680 19:42:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:27.680 19:42:21 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.680 19:42:21 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:22:27.680 19:42:21 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:27.680 19:42:21 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.680 19:42:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:27.680 [2024-12-05 19:42:21.062770] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.680 19:42:21 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.680 19:42:21 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:27.680 19:42:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:27.680 19:42:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:27.680 19:42:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:27.680 ************************************ 00:22:27.680 START TEST fio_dif_1_default 00:22:27.680 ************************************ 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:22:27.680 19:42:21 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:27.680 bdev_null0 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:27.680 [2024-12-05 19:42:21.106922] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:27.680 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:27.681 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:27.681 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:27.681 19:42:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:22:27.681 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:27.681 19:42:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:22:27.681 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:27.681 19:42:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.681 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:27.681 19:42:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.681 { 00:22:27.681 "params": { 00:22:27.681 "name": "Nvme$subsystem", 00:22:27.681 "trtype": "$TEST_TRANSPORT", 00:22:27.681 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:27.681 "adrfam": "ipv4", 00:22:27.681 "trsvcid": "$NVMF_PORT", 00:22:27.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.681 "hdgst": ${hdgst:-false}, 00:22:27.681 "ddgst": ${ddgst:-false} 00:22:27.681 }, 00:22:27.681 "method": "bdev_nvme_attach_controller" 00:22:27.681 } 00:22:27.681 EOF 00:22:27.681 )") 00:22:27.681 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:27.681 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:27.681 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:22:27.681 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:27.681 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:27.681 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:22:27.681 19:42:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:22:27.939 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:22:27.939 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:22:27.939 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:27.939 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:27.939 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:22:27.939 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:22:27.939 19:42:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:22:27.939 19:42:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:22:27.939 19:42:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:22:27.939 19:42:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:27.939 "params": { 00:22:27.939 "name": "Nvme0", 00:22:27.939 "trtype": "tcp", 00:22:27.939 "traddr": "10.0.0.3", 00:22:27.939 "adrfam": "ipv4", 00:22:27.939 "trsvcid": "4420", 00:22:27.940 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:27.940 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:27.940 "hdgst": false, 00:22:27.940 "ddgst": false 00:22:27.940 }, 00:22:27.940 "method": "bdev_nvme_attach_controller" 00:22:27.940 }' 00:22:27.940 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:27.940 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:27.940 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:27.940 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:27.940 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:27.940 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:27.940 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:27.940 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:27.940 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:27.940 19:42:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:27.940 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:27.940 fio-3.35 00:22:27.940 Starting 1 thread 00:22:40.144 00:22:40.144 filename0: (groupid=0, jobs=1): err= 0: pid=82965: Thu Dec 5 19:42:31 2024 00:22:40.144 read: IOPS=8708, BW=34.0MiB/s (35.7MB/s)(340MiB/10001msec) 00:22:40.144 slat (usec): min=6, max=917, avg= 8.61, stdev= 4.51 00:22:40.144 clat (usec): min=344, max=4084, avg=434.17, stdev=43.00 00:22:40.144 lat (usec): min=351, max=4117, avg=442.78, stdev=43.93 00:22:40.144 clat percentiles (usec): 00:22:40.144 | 1.00th=[ 363], 5.00th=[ 379], 10.00th=[ 392], 20.00th=[ 408], 00:22:40.144 | 30.00th=[ 416], 40.00th=[ 429], 50.00th=[ 433], 60.00th=[ 441], 00:22:40.144 | 70.00th=[ 449], 80.00th=[ 461], 90.00th=[ 474], 95.00th=[ 486], 00:22:40.144 | 99.00th=[ 510], 99.50th=[ 523], 99.90th=[ 553], 99.95th=[ 578], 00:22:40.144 | 99.99th=[ 1795] 00:22:40.144 bw ( KiB/s): min=33440, max=37216, per=100.00%, avg=34900.21, stdev=855.23, samples=19 00:22:40.144 iops : min= 8360, max= 9304, avg=8724.95, stdev=213.86, samples=19 00:22:40.144 lat (usec) : 500=98.01%, 750=1.97%, 1000=0.01% 00:22:40.144 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:22:40.144 cpu : usr=85.45%, sys=12.71%, ctx=24, majf=0, minf=9 00:22:40.144 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:40.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.144 issued rwts: total=87092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.144 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:40.144 00:22:40.144 Run status group 0 (all 
jobs): 00:22:40.144 READ: bw=34.0MiB/s (35.7MB/s), 34.0MiB/s-34.0MiB/s (35.7MB/s-35.7MB/s), io=340MiB (357MB), run=10001-10001msec 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.144 00:22:40.144 real 0m11.075s 00:22:40.144 user 0m9.238s 00:22:40.144 sys 0m1.560s 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.144 ************************************ 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:40.144 END TEST fio_dif_1_default 00:22:40.144 ************************************ 00:22:40.144 19:42:32 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:40.144 19:42:32 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:40.144 19:42:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:40.144 19:42:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:40.144 ************************************ 00:22:40.144 START TEST fio_dif_1_multi_subsystems 00:22:40.144 ************************************ 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:40.144 bdev_null0 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:40.144 [2024-12-05 19:42:32.230041] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.144 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:40.145 bdev_null1 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.145 { 00:22:40.145 "params": { 00:22:40.145 "name": "Nvme$subsystem", 00:22:40.145 "trtype": "$TEST_TRANSPORT", 00:22:40.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.145 "adrfam": "ipv4", 00:22:40.145 "trsvcid": "$NVMF_PORT", 00:22:40.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.145 "hdgst": ${hdgst:-false}, 00:22:40.145 "ddgst": ${ddgst:-false} 00:22:40.145 }, 00:22:40.145 "method": "bdev_nvme_attach_controller" 00:22:40.145 } 00:22:40.145 EOF 00:22:40.145 )") 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 
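
fio_dif_1_multi_subsystems repeats the same pattern with two subsystems so a single fio process can exercise two DIF-protected namespaces at once. The rpc_cmd calls scattered through the trace above correspond to the following scripts/rpc.py sequence (rpc_cmd is just the harness wrapper around rpc.py talking to /var/tmp/spdk.sock; every name and argument here is copied from the trace):

  # two 64 MiB null bdevs, 512-byte blocks + 16-byte metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  # one subsystem per bdev, both listening on the namespaced target address
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The generated fio JSON printed below then attaches one controller per subsystem (Nvme0 to cnode0, Nvme1 to cnode1), giving fio two filenames to read from in parallel.
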
00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.145 { 00:22:40.145 "params": { 00:22:40.145 "name": "Nvme$subsystem", 00:22:40.145 "trtype": "$TEST_TRANSPORT", 00:22:40.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.145 "adrfam": "ipv4", 00:22:40.145 "trsvcid": "$NVMF_PORT", 00:22:40.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.145 "hdgst": ${hdgst:-false}, 00:22:40.145 "ddgst": ${ddgst:-false} 00:22:40.145 }, 00:22:40.145 "method": "bdev_nvme_attach_controller" 00:22:40.145 } 00:22:40.145 EOF 00:22:40.145 )") 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:40.145 "params": { 00:22:40.145 "name": "Nvme0", 00:22:40.145 "trtype": "tcp", 00:22:40.145 "traddr": "10.0.0.3", 00:22:40.145 "adrfam": "ipv4", 00:22:40.145 "trsvcid": "4420", 00:22:40.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:40.145 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:40.145 "hdgst": false, 00:22:40.145 "ddgst": false 00:22:40.145 }, 00:22:40.145 "method": "bdev_nvme_attach_controller" 00:22:40.145 },{ 00:22:40.145 "params": { 00:22:40.145 "name": "Nvme1", 00:22:40.145 "trtype": "tcp", 00:22:40.145 "traddr": "10.0.0.3", 00:22:40.145 "adrfam": "ipv4", 00:22:40.145 "trsvcid": "4420", 00:22:40.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.145 "hdgst": false, 00:22:40.145 "ddgst": false 00:22:40.145 }, 00:22:40.145 "method": "bdev_nvme_attach_controller" 00:22:40.145 }' 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n 
'' ]] 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:40.145 19:42:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:40.145 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:40.145 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:40.145 fio-3.35 00:22:40.145 Starting 2 threads 00:22:50.165 00:22:50.165 filename0: (groupid=0, jobs=1): err= 0: pid=83126: Thu Dec 5 19:42:43 2024 00:22:50.165 read: IOPS=4685, BW=18.3MiB/s (19.2MB/s)(183MiB/10001msec) 00:22:50.165 slat (usec): min=5, max=812, avg=13.70, stdev= 6.64 00:22:50.165 clat (usec): min=596, max=1848, avg=815.34, stdev=49.66 00:22:50.165 lat (usec): min=604, max=1865, avg=829.04, stdev=50.64 00:22:50.165 clat percentiles (usec): 00:22:50.165 | 1.00th=[ 701], 5.00th=[ 734], 10.00th=[ 750], 20.00th=[ 775], 00:22:50.165 | 30.00th=[ 791], 40.00th=[ 807], 50.00th=[ 816], 60.00th=[ 824], 00:22:50.165 | 70.00th=[ 840], 80.00th=[ 857], 90.00th=[ 873], 95.00th=[ 898], 00:22:50.165 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 979], 99.95th=[ 1012], 00:22:50.165 | 99.99th=[ 1729] 00:22:50.165 bw ( KiB/s): min=18016, max=19328, per=50.06%, avg=18767.16, stdev=358.15, samples=19 00:22:50.165 iops : min= 4504, max= 4832, avg=4691.79, stdev=89.54, samples=19 00:22:50.165 lat (usec) : 750=8.79%, 1000=91.15% 00:22:50.165 lat (msec) : 2=0.06% 00:22:50.165 cpu : usr=90.41%, sys=8.16%, ctx=73, majf=0, minf=0 00:22:50.165 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:50.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.165 issued rwts: total=46860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.165 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:50.165 filename1: (groupid=0, jobs=1): err= 0: pid=83127: Thu Dec 5 19:42:43 2024 00:22:50.165 read: IOPS=4686, BW=18.3MiB/s (19.2MB/s)(183MiB/10001msec) 00:22:50.165 slat (nsec): min=6599, max=78633, avg=13950.23, stdev=5430.62 00:22:50.165 clat (usec): min=450, max=1552, avg=815.31, stdev=57.70 00:22:50.165 lat (usec): min=457, max=1579, avg=829.26, stdev=58.72 00:22:50.165 clat percentiles (usec): 00:22:50.165 | 1.00th=[ 676], 5.00th=[ 717], 10.00th=[ 742], 20.00th=[ 766], 00:22:50.165 | 30.00th=[ 791], 40.00th=[ 807], 50.00th=[ 816], 60.00th=[ 832], 00:22:50.165 | 70.00th=[ 848], 80.00th=[ 865], 90.00th=[ 889], 95.00th=[ 906], 00:22:50.165 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 996], 99.95th=[ 1020], 00:22:50.165 | 99.99th=[ 1287] 00:22:50.165 bw ( KiB/s): min=18048, max=19296, per=50.07%, avg=18770.53, stdev=350.43, samples=19 00:22:50.165 iops : min= 4512, max= 4824, avg=4692.63, stdev=87.61, samples=19 00:22:50.165 lat (usec) : 500=0.03%, 750=13.34%, 1000=86.55% 00:22:50.165 lat (msec) : 2=0.08% 00:22:50.165 cpu : usr=89.69%, sys=8.86%, ctx=17, majf=0, minf=9 00:22:50.165 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:50.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.165 issued rwts: total=46868,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:22:50.165 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:50.165 00:22:50.165 Run status group 0 (all jobs): 00:22:50.165 READ: bw=36.6MiB/s (38.4MB/s), 18.3MiB/s-18.3MiB/s (19.2MB/s-19.2MB/s), io=366MiB (384MB), run=10001-10001msec 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.165 00:22:50.165 real 0m11.210s 00:22:50.165 user 0m18.824s 00:22:50.165 sys 0m2.019s 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.165 19:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:50.165 ************************************ 00:22:50.165 END TEST fio_dif_1_multi_subsystems 00:22:50.165 ************************************ 00:22:50.165 19:42:43 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:22:50.165 19:42:43 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:50.165 19:42:43 nvmf_dif 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.165 19:42:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:50.165 ************************************ 00:22:50.165 START TEST fio_dif_rand_params 00:22:50.165 ************************************ 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.165 bdev_null0 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.165 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.166 [2024-12-05 19:42:43.492266] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 
-- # fio /dev/fd/62 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.166 { 00:22:50.166 "params": { 00:22:50.166 "name": "Nvme$subsystem", 00:22:50.166 "trtype": "$TEST_TRANSPORT", 00:22:50.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.166 "adrfam": "ipv4", 00:22:50.166 "trsvcid": "$NVMF_PORT", 00:22:50.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.166 "hdgst": ${hdgst:-false}, 00:22:50.166 "ddgst": ${ddgst:-false} 00:22:50.166 }, 00:22:50.166 "method": "bdev_nvme_attach_controller" 00:22:50.166 } 00:22:50.166 EOF 00:22:50.166 )") 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
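For readability, the compact bdev_nvme_attach_controller entry that the trace prints just below is the same object pretty-printed here. The fields are taken verbatim from the trace; the outer wrapper that jq assembles before the configuration reaches fio on /dev/fd/62 is not shown in this excerpt.

{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}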
00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:50.166 "params": { 00:22:50.166 "name": "Nvme0", 00:22:50.166 "trtype": "tcp", 00:22:50.166 "traddr": "10.0.0.3", 00:22:50.166 "adrfam": "ipv4", 00:22:50.166 "trsvcid": "4420", 00:22:50.166 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:50.166 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:50.166 "hdgst": false, 00:22:50.166 "ddgst": false 00:22:50.166 }, 00:22:50.166 "method": "bdev_nvme_attach_controller" 00:22:50.166 }' 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:50.166 19:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:50.425 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:50.425 ... 
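The fio step driven by the harness above can be reproduced by hand. A minimal sketch, assuming the generated JSON has been saved to bdev.json and that the attached controller exposes one namespace bdev (typically named Nvme0n1); gen_fio_conf's exact job file is not shown in the trace, so the job file below is illustrative and only mirrors the rw/bs/iodepth/numjobs/runtime values visible in the run.

# randread.fio (illustrative job file)
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

# run fio with the SPDK bdev plugin preloaded, as the trace does
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json randread.fio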
00:22:50.425 fio-3.35 00:22:50.425 Starting 3 threads 00:22:57.064 00:22:57.064 filename0: (groupid=0, jobs=1): err= 0: pid=83283: Thu Dec 5 19:42:49 2024 00:22:57.064 read: IOPS=250, BW=31.3MiB/s (32.8MB/s)(157MiB/5009msec) 00:22:57.064 slat (nsec): min=7006, max=43372, avg=10516.48, stdev=4431.54 00:22:57.064 clat (usec): min=4561, max=12864, avg=11956.64, stdev=493.17 00:22:57.064 lat (usec): min=4571, max=12877, avg=11967.15, stdev=493.05 00:22:57.064 clat percentiles (usec): 00:22:57.064 | 1.00th=[11207], 5.00th=[11338], 10.00th=[11469], 20.00th=[11731], 00:22:57.064 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:22:57.064 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12387], 95.00th=[12518], 00:22:57.064 | 99.00th=[12780], 99.50th=[12780], 99.90th=[12911], 99.95th=[12911], 00:22:57.064 | 99.99th=[12911] 00:22:57.064 bw ( KiB/s): min=30720, max=33024, per=33.37%, avg=32025.60, stdev=728.59, samples=10 00:22:57.064 iops : min= 240, max= 258, avg=250.20, stdev= 5.69, samples=10 00:22:57.064 lat (msec) : 10=0.24%, 20=99.76% 00:22:57.064 cpu : usr=90.93%, sys=8.53%, ctx=9, majf=0, minf=0 00:22:57.064 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:57.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:57.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:57.064 issued rwts: total=1254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:57.064 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:57.064 filename0: (groupid=0, jobs=1): err= 0: pid=83284: Thu Dec 5 19:42:49 2024 00:22:57.064 read: IOPS=249, BW=31.2MiB/s (32.8MB/s)(156MiB/5005msec) 00:22:57.064 slat (usec): min=5, max=118, avg=15.51, stdev= 5.82 00:22:57.064 clat (usec): min=10387, max=13833, avg=11966.31, stdev=355.27 00:22:57.064 lat (usec): min=10401, max=13849, avg=11981.81, stdev=355.57 00:22:57.064 clat percentiles (usec): 00:22:57.064 | 1.00th=[11207], 5.00th=[11338], 10.00th=[11469], 20.00th=[11731], 00:22:57.064 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:22:57.064 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12387], 95.00th=[12518], 00:22:57.064 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13829], 99.95th=[13829], 00:22:57.064 | 99.99th=[13829] 00:22:57.064 bw ( KiB/s): min=31488, max=33024, per=33.29%, avg=31948.80, stdev=536.99, samples=10 00:22:57.064 iops : min= 246, max= 258, avg=249.60, stdev= 4.20, samples=10 00:22:57.064 lat (msec) : 20=100.00% 00:22:57.064 cpu : usr=91.51%, sys=7.67%, ctx=18, majf=0, minf=0 00:22:57.064 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:57.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:57.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:57.064 issued rwts: total=1251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:57.064 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:57.064 filename0: (groupid=0, jobs=1): err= 0: pid=83285: Thu Dec 5 19:42:49 2024 00:22:57.064 read: IOPS=249, BW=31.2MiB/s (32.8MB/s)(156MiB/5005msec) 00:22:57.064 slat (nsec): min=5497, max=49449, avg=14536.51, stdev=4256.48 00:22:57.064 clat (usec): min=10372, max=13908, avg=11969.88, stdev=355.97 00:22:57.064 lat (usec): min=10385, max=13936, avg=11984.42, stdev=356.27 00:22:57.064 clat percentiles (usec): 00:22:57.064 | 1.00th=[11207], 5.00th=[11338], 10.00th=[11469], 20.00th=[11731], 00:22:57.064 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11994], 
60.00th=[12125], 00:22:57.064 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12387], 95.00th=[12518], 00:22:57.064 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13960], 99.95th=[13960], 00:22:57.064 | 99.99th=[13960] 00:22:57.064 bw ( KiB/s): min=31488, max=33024, per=33.29%, avg=31948.80, stdev=536.99, samples=10 00:22:57.064 iops : min= 246, max= 258, avg=249.60, stdev= 4.20, samples=10 00:22:57.064 lat (msec) : 20=100.00% 00:22:57.064 cpu : usr=91.87%, sys=7.63%, ctx=18, majf=0, minf=0 00:22:57.064 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:57.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:57.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:57.064 issued rwts: total=1251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:57.064 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:57.064 00:22:57.064 Run status group 0 (all jobs): 00:22:57.064 READ: bw=93.7MiB/s (98.3MB/s), 31.2MiB/s-31.3MiB/s (32.8MB/s-32.8MB/s), io=470MiB (492MB), run=5005-5009msec 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:22:57.064 19:42:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 bdev_null0 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 [2024-12-05 19:42:49.606969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 bdev_null1 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 bdev_null2 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:57.064 19:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:57.064 { 00:22:57.064 "params": { 00:22:57.064 "name": "Nvme$subsystem", 00:22:57.064 "trtype": "$TEST_TRANSPORT", 00:22:57.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.064 "adrfam": "ipv4", 00:22:57.064 "trsvcid": "$NVMF_PORT", 00:22:57.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.064 "hdgst": ${hdgst:-false}, 00:22:57.065 "ddgst": ${ddgst:-false} 00:22:57.065 }, 00:22:57.065 "method": "bdev_nvme_attach_controller" 00:22:57.065 } 00:22:57.065 EOF 00:22:57.065 )") 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:57.065 { 00:22:57.065 "params": { 00:22:57.065 "name": "Nvme$subsystem", 00:22:57.065 "trtype": "$TEST_TRANSPORT", 00:22:57.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.065 "adrfam": "ipv4", 00:22:57.065 "trsvcid": "$NVMF_PORT", 00:22:57.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.065 "hdgst": ${hdgst:-false}, 00:22:57.065 "ddgst": ${ddgst:-false} 00:22:57.065 }, 00:22:57.065 "method": "bdev_nvme_attach_controller" 00:22:57.065 } 00:22:57.065 EOF 00:22:57.065 )") 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:57.065 19:42:49 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:57.065 { 00:22:57.065 "params": { 00:22:57.065 "name": "Nvme$subsystem", 00:22:57.065 "trtype": "$TEST_TRANSPORT", 00:22:57.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.065 "adrfam": "ipv4", 00:22:57.065 "trsvcid": "$NVMF_PORT", 00:22:57.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.065 "hdgst": ${hdgst:-false}, 00:22:57.065 "ddgst": ${ddgst:-false} 00:22:57.065 }, 00:22:57.065 "method": "bdev_nvme_attach_controller" 00:22:57.065 } 00:22:57.065 EOF 00:22:57.065 )") 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:57.065 "params": { 00:22:57.065 "name": "Nvme0", 00:22:57.065 "trtype": "tcp", 00:22:57.065 "traddr": "10.0.0.3", 00:22:57.065 "adrfam": "ipv4", 00:22:57.065 "trsvcid": "4420", 00:22:57.065 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:57.065 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:57.065 "hdgst": false, 00:22:57.065 "ddgst": false 00:22:57.065 }, 00:22:57.065 "method": "bdev_nvme_attach_controller" 00:22:57.065 },{ 00:22:57.065 "params": { 00:22:57.065 "name": "Nvme1", 00:22:57.065 "trtype": "tcp", 00:22:57.065 "traddr": "10.0.0.3", 00:22:57.065 "adrfam": "ipv4", 00:22:57.065 "trsvcid": "4420", 00:22:57.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.065 "hdgst": false, 00:22:57.065 "ddgst": false 00:22:57.065 }, 00:22:57.065 "method": "bdev_nvme_attach_controller" 00:22:57.065 },{ 00:22:57.065 "params": { 00:22:57.065 "name": "Nvme2", 00:22:57.065 "trtype": "tcp", 00:22:57.065 "traddr": "10.0.0.3", 00:22:57.065 "adrfam": "ipv4", 00:22:57.065 "trsvcid": "4420", 00:22:57.065 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:57.065 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:57.065 "hdgst": false, 00:22:57.065 "ddgst": false 00:22:57.065 }, 00:22:57.065 "method": "bdev_nvme_attach_controller" 00:22:57.065 }' 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:57.065 19:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:57.065 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:57.065 ... 00:22:57.065 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:57.065 ... 00:22:57.065 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:57.065 ... 00:22:57.065 fio-3.35 00:22:57.065 Starting 24 threads 00:23:09.264 00:23:09.264 filename0: (groupid=0, jobs=1): err= 0: pid=83381: Thu Dec 5 19:43:00 2024 00:23:09.264 read: IOPS=237, BW=952KiB/s (975kB/s)(9548KiB/10032msec) 00:23:09.264 slat (usec): min=5, max=9036, avg=17.93, stdev=184.71 00:23:09.264 clat (msec): min=6, max=157, avg=67.10, stdev=22.59 00:23:09.264 lat (msec): min=6, max=157, avg=67.12, stdev=22.59 00:23:09.264 clat percentiles (msec): 00:23:09.264 | 1.00th=[ 13], 5.00th=[ 27], 10.00th=[ 41], 20.00th=[ 48], 00:23:09.264 | 30.00th=[ 54], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:23:09.264 | 70.00th=[ 77], 80.00th=[ 82], 90.00th=[ 103], 95.00th=[ 109], 00:23:09.264 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 140], 00:23:09.264 | 99.99th=[ 157] 00:23:09.264 bw ( KiB/s): min= 656, max= 1704, per=4.28%, avg=948.25, stdev=239.95, samples=20 00:23:09.264 iops : min= 164, max= 426, avg=237.05, stdev=59.98, samples=20 00:23:09.264 lat (msec) : 10=0.67%, 20=1.34%, 50=24.55%, 100=63.18%, 250=10.26% 00:23:09.264 cpu : usr=39.80%, sys=2.45%, ctx=1255, majf=0, minf=9 00:23:09.264 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:23:09.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.264 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.264 issued rwts: total=2387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.264 filename0: (groupid=0, jobs=1): err= 0: pid=83382: Thu Dec 5 19:43:00 2024 00:23:09.264 read: IOPS=211, BW=846KiB/s (866kB/s)(8472KiB/10016msec) 00:23:09.264 slat (usec): min=8, max=9024, avg=26.82, stdev=314.40 00:23:09.264 clat (msec): min=20, max=156, avg=75.43, stdev=24.92 00:23:09.264 lat (msec): min=20, max=156, avg=75.46, stdev=24.93 00:23:09.264 clat percentiles (msec): 00:23:09.264 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 56], 00:23:09.264 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:23:09.264 | 70.00th=[ 81], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:23:09.264 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 157], 99.95th=[ 157], 00:23:09.264 | 99.99th=[ 157] 00:23:09.264 bw ( KiB/s): min= 512, max= 1256, per=3.81%, avg=843.40, stdev=216.24, samples=20 00:23:09.264 iops : min= 128, max= 314, avg=210.80, stdev=54.04, samples=20 00:23:09.264 lat (msec) : 50=15.34%, 100=66.29%, 250=18.37% 00:23:09.264 cpu : usr=39.39%, sys=2.12%, ctx=1201, majf=0, minf=9 00:23:09.264 IO depths : 1=0.1%, 2=3.4%, 4=13.9%, 8=68.2%, 16=14.4%, 32=0.0%, >=64=0.0% 00:23:09.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.264 complete : 0=0.0%, 4=91.3%, 8=5.6%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.264 issued rwts: total=2118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.264 
latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.264 filename0: (groupid=0, jobs=1): err= 0: pid=83383: Thu Dec 5 19:43:00 2024 00:23:09.264 read: IOPS=246, BW=984KiB/s (1008kB/s)(9844KiB/10002msec) 00:23:09.264 slat (usec): min=8, max=10025, avg=20.98, stdev=258.52 00:23:09.264 clat (msec): min=3, max=131, avg=64.93, stdev=22.54 00:23:09.264 lat (msec): min=3, max=131, avg=64.95, stdev=22.55 00:23:09.264 clat percentiles (msec): 00:23:09.264 | 1.00th=[ 6], 5.00th=[ 29], 10.00th=[ 40], 20.00th=[ 48], 00:23:09.264 | 30.00th=[ 48], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:23:09.264 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:23:09.264 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:23:09.264 | 99.99th=[ 132] 00:23:09.264 bw ( KiB/s): min= 664, max= 1080, per=4.21%, avg=933.05, stdev=153.49, samples=19 00:23:09.264 iops : min= 166, max= 270, avg=233.26, stdev=38.37, samples=19 00:23:09.264 lat (msec) : 4=0.08%, 10=1.30%, 20=1.71%, 50=30.27%, 100=58.19% 00:23:09.264 lat (msec) : 250=8.45% 00:23:09.264 cpu : usr=31.37%, sys=2.01%, ctx=871, majf=0, minf=9 00:23:09.264 IO depths : 1=0.1%, 2=0.2%, 4=1.0%, 8=83.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:23:09.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.264 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.264 issued rwts: total=2461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.264 filename0: (groupid=0, jobs=1): err= 0: pid=83384: Thu Dec 5 19:43:00 2024 00:23:09.264 read: IOPS=232, BW=929KiB/s (951kB/s)(9300KiB/10015msec) 00:23:09.264 slat (usec): min=3, max=8025, avg=18.19, stdev=166.21 00:23:09.264 clat (msec): min=14, max=131, avg=68.83, stdev=20.85 00:23:09.264 lat (msec): min=14, max=131, avg=68.85, stdev=20.85 00:23:09.264 clat percentiles (msec): 00:23:09.264 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 48], 00:23:09.264 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:23:09.264 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 103], 95.00th=[ 109], 00:23:09.264 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 131], 00:23:09.264 | 99.99th=[ 132] 00:23:09.264 bw ( KiB/s): min= 664, max= 1367, per=4.17%, avg=924.35, stdev=176.30, samples=20 00:23:09.264 iops : min= 166, max= 341, avg=231.05, stdev=43.98, samples=20 00:23:09.264 lat (msec) : 20=0.13%, 50=25.33%, 100=63.66%, 250=10.88% 00:23:09.264 cpu : usr=33.87%, sys=1.96%, ctx=1191, majf=0, minf=9 00:23:09.264 IO depths : 1=0.2%, 2=0.4%, 4=1.2%, 8=82.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:09.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.264 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.264 issued rwts: total=2325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.264 filename0: (groupid=0, jobs=1): err= 0: pid=83385: Thu Dec 5 19:43:00 2024 00:23:09.264 read: IOPS=241, BW=966KiB/s (989kB/s)(9676KiB/10015msec) 00:23:09.264 slat (usec): min=4, max=8034, avg=31.57, stdev=364.01 00:23:09.264 clat (msec): min=15, max=119, avg=66.05, stdev=20.66 00:23:09.264 lat (msec): min=15, max=119, avg=66.08, stdev=20.64 00:23:09.264 clat percentiles (msec): 00:23:09.264 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:23:09.264 | 30.00th=[ 50], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:23:09.264 | 70.00th=[ 72], 
80.00th=[ 82], 90.00th=[ 97], 95.00th=[ 108], 00:23:09.264 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:23:09.264 | 99.99th=[ 121] 00:23:09.264 bw ( KiB/s): min= 688, max= 1192, per=4.24%, avg=940.16, stdev=159.45, samples=19 00:23:09.264 iops : min= 172, max= 298, avg=235.00, stdev=39.83, samples=19 00:23:09.264 lat (msec) : 20=0.50%, 50=30.84%, 100=59.57%, 250=9.09% 00:23:09.264 cpu : usr=33.54%, sys=1.93%, ctx=913, majf=0, minf=9 00:23:09.264 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:23:09.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.264 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.264 issued rwts: total=2419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.264 filename0: (groupid=0, jobs=1): err= 0: pid=83386: Thu Dec 5 19:43:00 2024 00:23:09.264 read: IOPS=239, BW=959KiB/s (982kB/s)(9588KiB/10003msec) 00:23:09.264 slat (usec): min=4, max=8027, avg=25.50, stdev=258.69 00:23:09.264 clat (msec): min=2, max=128, avg=66.65, stdev=22.50 00:23:09.264 lat (msec): min=2, max=128, avg=66.67, stdev=22.50 00:23:09.264 clat percentiles (msec): 00:23:09.264 | 1.00th=[ 7], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 48], 00:23:09.264 | 30.00th=[ 52], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 72], 00:23:09.264 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 100], 95.00th=[ 108], 00:23:09.264 | 99.00th=[ 120], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:23:09.264 | 99.99th=[ 129] 00:23:09.264 bw ( KiB/s): min= 688, max= 1056, per=4.09%, avg=905.89, stdev=136.29, samples=19 00:23:09.264 iops : min= 172, max= 264, avg=226.47, stdev=34.07, samples=19 00:23:09.264 lat (msec) : 4=0.25%, 10=1.84%, 20=1.50%, 50=25.49%, 100=61.12% 00:23:09.264 lat (msec) : 250=9.80% 00:23:09.264 cpu : usr=35.82%, sys=2.23%, ctx=1047, majf=0, minf=9 00:23:09.264 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=81.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:23:09.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.264 complete : 0=0.0%, 4=87.6%, 8=11.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.264 issued rwts: total=2397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.264 filename0: (groupid=0, jobs=1): err= 0: pid=83387: Thu Dec 5 19:43:00 2024 00:23:09.264 read: IOPS=221, BW=886KiB/s (907kB/s)(8884KiB/10027msec) 00:23:09.264 slat (usec): min=5, max=4026, avg=16.02, stdev=85.29 00:23:09.264 clat (msec): min=6, max=154, avg=72.06, stdev=27.47 00:23:09.264 lat (msec): min=6, max=154, avg=72.08, stdev=27.47 00:23:09.264 clat percentiles (msec): 00:23:09.264 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 40], 20.00th=[ 51], 00:23:09.264 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 75], 00:23:09.264 | 70.00th=[ 81], 80.00th=[ 94], 90.00th=[ 111], 95.00th=[ 117], 00:23:09.265 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 150], 99.95th=[ 155], 00:23:09.265 | 99.99th=[ 155] 00:23:09.265 bw ( KiB/s): min= 512, max= 1664, per=3.99%, avg=884.25, stdev=282.49, samples=20 00:23:09.265 iops : min= 128, max= 416, avg=221.05, stdev=70.61, samples=20 00:23:09.265 lat (msec) : 10=2.16%, 20=2.07%, 50=16.03%, 100=62.40%, 250=17.33% 00:23:09.265 cpu : usr=38.54%, sys=2.44%, ctx=1366, majf=0, minf=9 00:23:09.265 IO depths : 1=0.1%, 2=2.5%, 4=10.0%, 8=72.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:23:09.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:23:09.265 complete : 0=0.0%, 4=90.1%, 8=7.7%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.265 issued rwts: total=2221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.265 filename0: (groupid=0, jobs=1): err= 0: pid=83388: Thu Dec 5 19:43:00 2024 00:23:09.265 read: IOPS=242, BW=968KiB/s (992kB/s)(9688KiB/10005msec) 00:23:09.265 slat (usec): min=5, max=8032, avg=37.45, stdev=412.14 00:23:09.265 clat (msec): min=4, max=121, avg=65.86, stdev=21.65 00:23:09.265 lat (msec): min=4, max=121, avg=65.90, stdev=21.64 00:23:09.265 clat percentiles (msec): 00:23:09.265 | 1.00th=[ 10], 5.00th=[ 35], 10.00th=[ 42], 20.00th=[ 48], 00:23:09.265 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:23:09.265 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 99], 95.00th=[ 108], 00:23:09.265 | 99.00th=[ 115], 99.50th=[ 117], 99.90th=[ 123], 99.95th=[ 123], 00:23:09.265 | 99.99th=[ 123] 00:23:09.265 bw ( KiB/s): min= 712, max= 1096, per=4.17%, avg=924.16, stdev=143.14, samples=19 00:23:09.265 iops : min= 178, max= 274, avg=231.00, stdev=35.78, samples=19 00:23:09.265 lat (msec) : 10=1.03%, 20=1.16%, 50=26.01%, 100=62.43%, 250=9.37% 00:23:09.265 cpu : usr=36.97%, sys=1.99%, ctx=1145, majf=0, minf=9 00:23:09.265 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:09.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.265 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.265 issued rwts: total=2422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.265 filename1: (groupid=0, jobs=1): err= 0: pid=83389: Thu Dec 5 19:43:00 2024 00:23:09.265 read: IOPS=239, BW=958KiB/s (981kB/s)(9584KiB/10008msec) 00:23:09.265 slat (usec): min=5, max=8028, avg=22.98, stdev=245.53 00:23:09.265 clat (msec): min=6, max=128, avg=66.71, stdev=22.01 00:23:09.265 lat (msec): min=6, max=128, avg=66.73, stdev=22.00 00:23:09.265 clat percentiles (msec): 00:23:09.265 | 1.00th=[ 12], 5.00th=[ 35], 10.00th=[ 44], 20.00th=[ 48], 00:23:09.265 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:23:09.265 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 108], 00:23:09.265 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 127], 99.95th=[ 127], 00:23:09.265 | 99.99th=[ 129] 00:23:09.265 bw ( KiB/s): min= 664, max= 1128, per=4.12%, avg=912.42, stdev=154.54, samples=19 00:23:09.265 iops : min= 166, max= 282, avg=228.05, stdev=38.62, samples=19 00:23:09.265 lat (msec) : 10=0.75%, 20=1.71%, 50=26.00%, 100=61.85%, 250=9.68% 00:23:09.265 cpu : usr=35.33%, sys=1.93%, ctx=1012, majf=0, minf=9 00:23:09.265 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=83.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:09.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.265 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.265 issued rwts: total=2396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.265 filename1: (groupid=0, jobs=1): err= 0: pid=83390: Thu Dec 5 19:43:00 2024 00:23:09.265 read: IOPS=233, BW=935KiB/s (957kB/s)(9372KiB/10025msec) 00:23:09.265 slat (usec): min=3, max=8023, avg=17.54, stdev=169.49 00:23:09.265 clat (msec): min=6, max=143, avg=68.35, stdev=23.44 00:23:09.265 lat (msec): min=6, max=143, avg=68.37, stdev=23.43 00:23:09.265 clat percentiles (msec): 00:23:09.265 | 1.00th=[ 10], 
5.00th=[ 26], 10.00th=[ 41], 20.00th=[ 48], 00:23:09.265 | 30.00th=[ 60], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:23:09.265 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 106], 95.00th=[ 109], 00:23:09.265 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 144], 00:23:09.265 | 99.99th=[ 144] 00:23:09.265 bw ( KiB/s): min= 632, max= 1820, per=4.20%, avg=930.50, stdev=261.12, samples=20 00:23:09.265 iops : min= 158, max= 455, avg=232.60, stdev=65.27, samples=20 00:23:09.265 lat (msec) : 10=1.92%, 20=2.09%, 50=20.57%, 100=64.96%, 250=10.46% 00:23:09.265 cpu : usr=31.65%, sys=1.84%, ctx=868, majf=0, minf=9 00:23:09.265 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.5%, 16=16.6%, 32=0.0%, >=64=0.0% 00:23:09.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.265 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.265 issued rwts: total=2343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.265 filename1: (groupid=0, jobs=1): err= 0: pid=83391: Thu Dec 5 19:43:00 2024 00:23:09.265 read: IOPS=242, BW=972KiB/s (995kB/s)(9728KiB/10010msec) 00:23:09.265 slat (usec): min=4, max=9024, avg=22.13, stdev=215.86 00:23:09.265 clat (msec): min=9, max=123, avg=65.71, stdev=21.33 00:23:09.265 lat (msec): min=9, max=123, avg=65.74, stdev=21.34 00:23:09.265 clat percentiles (msec): 00:23:09.265 | 1.00th=[ 17], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 00:23:09.265 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:23:09.265 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 99], 95.00th=[ 107], 00:23:09.265 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 124], 99.95th=[ 124], 00:23:09.265 | 99.99th=[ 124] 00:23:09.265 bw ( KiB/s): min= 712, max= 1128, per=4.22%, avg=935.11, stdev=147.49, samples=19 00:23:09.265 iops : min= 178, max= 282, avg=233.74, stdev=36.84, samples=19 00:23:09.265 lat (msec) : 10=0.12%, 20=1.60%, 50=26.07%, 100=62.79%, 250=9.42% 00:23:09.265 cpu : usr=42.01%, sys=2.50%, ctx=1165, majf=0, minf=0 00:23:09.265 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=82.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:09.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.265 complete : 0=0.0%, 4=87.1%, 8=12.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.265 issued rwts: total=2432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.265 filename1: (groupid=0, jobs=1): err= 0: pid=83392: Thu Dec 5 19:43:00 2024 00:23:09.265 read: IOPS=239, BW=957KiB/s (980kB/s)(9592KiB/10020msec) 00:23:09.265 slat (usec): min=4, max=6062, avg=19.49, stdev=169.36 00:23:09.265 clat (msec): min=4, max=143, avg=66.73, stdev=24.08 00:23:09.265 lat (msec): min=4, max=143, avg=66.75, stdev=24.08 00:23:09.265 clat percentiles (msec): 00:23:09.265 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 41], 20.00th=[ 48], 00:23:09.265 | 30.00th=[ 55], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:23:09.265 | 70.00th=[ 77], 80.00th=[ 82], 90.00th=[ 105], 95.00th=[ 109], 00:23:09.265 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 138], 99.95th=[ 144], 00:23:09.265 | 99.99th=[ 144] 00:23:09.265 bw ( KiB/s): min= 632, max= 2008, per=4.31%, avg=955.45, stdev=298.82, samples=20 00:23:09.265 iops : min= 158, max= 502, avg=238.85, stdev=74.70, samples=20 00:23:09.265 lat (msec) : 10=2.59%, 20=1.46%, 50=21.56%, 100=62.97%, 250=11.43% 00:23:09.265 cpu : usr=43.40%, sys=2.46%, ctx=1365, majf=0, minf=0 00:23:09.265 IO depths : 1=0.1%, 
2=0.4%, 4=1.5%, 8=81.7%, 16=16.4%, 32=0.0%, >=64=0.0% 00:23:09.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.265 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.265 issued rwts: total=2398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.265 filename1: (groupid=0, jobs=1): err= 0: pid=83393: Thu Dec 5 19:43:00 2024 00:23:09.265 read: IOPS=215, BW=860KiB/s (881kB/s)(8620KiB/10019msec) 00:23:09.265 slat (usec): min=7, max=4025, avg=15.56, stdev=98.34 00:23:09.265 clat (msec): min=4, max=163, avg=74.22, stdev=28.34 00:23:09.265 lat (msec): min=4, max=163, avg=74.24, stdev=28.34 00:23:09.265 clat percentiles (msec): 00:23:09.265 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 40], 20.00th=[ 55], 00:23:09.265 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:23:09.265 | 70.00th=[ 84], 80.00th=[ 99], 90.00th=[ 109], 95.00th=[ 123], 00:23:09.265 | 99.00th=[ 146], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 163], 00:23:09.265 | 99.99th=[ 163] 00:23:09.265 bw ( KiB/s): min= 528, max= 1908, per=3.87%, avg=857.60, stdev=307.86, samples=20 00:23:09.265 iops : min= 132, max= 477, avg=214.35, stdev=76.99, samples=20 00:23:09.265 lat (msec) : 10=2.23%, 20=2.97%, 50=11.55%, 100=64.32%, 250=18.93% 00:23:09.265 cpu : usr=40.11%, sys=2.40%, ctx=1269, majf=0, minf=1 00:23:09.265 IO depths : 1=0.1%, 2=3.7%, 4=14.8%, 8=67.3%, 16=14.2%, 32=0.0%, >=64=0.0% 00:23:09.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.265 complete : 0=0.0%, 4=91.6%, 8=5.2%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.265 issued rwts: total=2155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.265 filename1: (groupid=0, jobs=1): err= 0: pid=83394: Thu Dec 5 19:43:00 2024 00:23:09.265 read: IOPS=224, BW=899KiB/s (921kB/s)(9016KiB/10024msec) 00:23:09.265 slat (usec): min=5, max=8024, avg=21.79, stdev=242.31 00:23:09.265 clat (msec): min=22, max=132, avg=71.02, stdev=20.97 00:23:09.265 lat (msec): min=22, max=132, avg=71.04, stdev=20.96 00:23:09.265 clat percentiles (msec): 00:23:09.265 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 53], 00:23:09.265 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:23:09.265 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 109], 00:23:09.265 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 124], 99.95th=[ 132], 00:23:09.265 | 99.99th=[ 133] 00:23:09.265 bw ( KiB/s): min= 664, max= 1272, per=4.05%, avg=896.90, stdev=174.17, samples=20 00:23:09.265 iops : min= 166, max= 318, avg=224.20, stdev=43.52, samples=20 00:23:09.265 lat (msec) : 50=18.94%, 100=69.12%, 250=11.93% 00:23:09.265 cpu : usr=35.22%, sys=2.39%, ctx=1467, majf=0, minf=9 00:23:09.265 IO depths : 1=0.1%, 2=1.1%, 4=3.9%, 8=78.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:09.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.265 complete : 0=0.0%, 4=88.6%, 8=10.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.265 issued rwts: total=2254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.265 filename1: (groupid=0, jobs=1): err= 0: pid=83395: Thu Dec 5 19:43:00 2024 00:23:09.265 read: IOPS=216, BW=864KiB/s (885kB/s)(8660KiB/10018msec) 00:23:09.266 slat (usec): min=8, max=8037, avg=18.36, stdev=172.55 00:23:09.266 clat (msec): min=18, max=156, avg=73.87, stdev=24.80 00:23:09.266 lat 
(msec): min=18, max=156, avg=73.89, stdev=24.80 00:23:09.266 clat percentiles (msec): 00:23:09.266 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 45], 20.00th=[ 53], 00:23:09.266 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:23:09.266 | 70.00th=[ 82], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 111], 00:23:09.266 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 155], 99.95th=[ 157], 00:23:09.266 | 99.99th=[ 157] 00:23:09.266 bw ( KiB/s): min= 528, max= 1400, per=3.89%, avg=861.70, stdev=239.08, samples=20 00:23:09.266 iops : min= 132, max= 350, avg=215.35, stdev=59.73, samples=20 00:23:09.266 lat (msec) : 20=0.42%, 50=17.00%, 100=67.11%, 250=15.47% 00:23:09.266 cpu : usr=37.78%, sys=2.36%, ctx=1134, majf=0, minf=9 00:23:09.266 IO depths : 1=0.1%, 2=3.2%, 4=13.0%, 8=69.5%, 16=14.3%, 32=0.0%, >=64=0.0% 00:23:09.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.266 complete : 0=0.0%, 4=90.8%, 8=6.3%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.266 issued rwts: total=2165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.266 filename1: (groupid=0, jobs=1): err= 0: pid=83396: Thu Dec 5 19:43:00 2024 00:23:09.266 read: IOPS=230, BW=924KiB/s (946kB/s)(9244KiB/10009msec) 00:23:09.266 slat (usec): min=4, max=8035, avg=18.80, stdev=166.91 00:23:09.266 clat (msec): min=14, max=150, avg=69.19, stdev=22.73 00:23:09.266 lat (msec): min=14, max=150, avg=69.20, stdev=22.73 00:23:09.266 clat percentiles (msec): 00:23:09.266 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:23:09.266 | 30.00th=[ 56], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 72], 00:23:09.266 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 108], 95.00th=[ 109], 00:23:09.266 | 99.00th=[ 127], 99.50th=[ 133], 99.90th=[ 144], 99.95th=[ 150], 00:23:09.266 | 99.99th=[ 150] 00:23:09.266 bw ( KiB/s): min= 528, max= 1138, per=4.03%, avg=893.16, stdev=186.59, samples=19 00:23:09.266 iops : min= 132, max= 284, avg=223.26, stdev=46.61, samples=19 00:23:09.266 lat (msec) : 20=0.26%, 50=24.06%, 100=62.87%, 250=12.81% 00:23:09.266 cpu : usr=38.89%, sys=2.10%, ctx=1132, majf=0, minf=9 00:23:09.266 IO depths : 1=0.1%, 2=1.6%, 4=6.5%, 8=76.8%, 16=15.0%, 32=0.0%, >=64=0.0% 00:23:09.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.266 complete : 0=0.0%, 4=88.7%, 8=9.9%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.266 issued rwts: total=2311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.266 filename2: (groupid=0, jobs=1): err= 0: pid=83397: Thu Dec 5 19:43:00 2024 00:23:09.266 read: IOPS=226, BW=904KiB/s (926kB/s)(9056KiB/10017msec) 00:23:09.266 slat (usec): min=3, max=6299, avg=27.20, stdev=238.71 00:23:09.266 clat (msec): min=15, max=159, avg=70.65, stdev=22.93 00:23:09.266 lat (msec): min=15, max=159, avg=70.67, stdev=22.94 00:23:09.266 clat percentiles (msec): 00:23:09.266 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 50], 00:23:09.266 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 72], 00:23:09.266 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 105], 95.00th=[ 112], 00:23:09.266 | 99.00th=[ 131], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 161], 00:23:09.266 | 99.99th=[ 161] 00:23:09.266 bw ( KiB/s): min= 496, max= 1320, per=4.06%, avg=899.60, stdev=197.11, samples=20 00:23:09.266 iops : min= 124, max= 330, avg=224.80, stdev=49.20, samples=20 00:23:09.266 lat (msec) : 20=0.57%, 50=20.01%, 100=66.74%, 250=12.68% 00:23:09.266 
cpu : usr=38.02%, sys=2.56%, ctx=1211, majf=0, minf=9 00:23:09.266 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=77.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:09.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.266 complete : 0=0.0%, 4=88.8%, 8=10.0%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.266 issued rwts: total=2264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.266 filename2: (groupid=0, jobs=1): err= 0: pid=83398: Thu Dec 5 19:43:00 2024 00:23:09.266 read: IOPS=217, BW=869KiB/s (890kB/s)(8692KiB/10001msec) 00:23:09.266 slat (usec): min=5, max=4021, avg=15.27, stdev=86.15 00:23:09.266 clat (usec): min=470, max=156109, avg=73539.32, stdev=32036.20 00:23:09.266 lat (usec): min=489, max=156128, avg=73554.59, stdev=32037.68 00:23:09.266 clat percentiles (usec): 00:23:09.266 | 1.00th=[ 1418], 5.00th=[ 3458], 10.00th=[ 23462], 20.00th=[ 52691], 00:23:09.266 | 30.00th=[ 67634], 40.00th=[ 71828], 50.00th=[ 73925], 60.00th=[ 79168], 00:23:09.266 | 70.00th=[ 86508], 80.00th=[100140], 90.00th=[108528], 95.00th=[120062], 00:23:09.266 | 99.00th=[143655], 99.50th=[143655], 99.90th=[156238], 99.95th=[156238], 00:23:09.266 | 99.99th=[156238] 00:23:09.266 bw ( KiB/s): min= 512, max= 1080, per=3.43%, avg=760.74, stdev=164.49, samples=19 00:23:09.266 iops : min= 128, max= 270, avg=190.16, stdev=41.10, samples=19 00:23:09.266 lat (usec) : 500=0.14% 00:23:09.266 lat (msec) : 2=4.10%, 4=1.47%, 10=2.07%, 20=1.61%, 50=10.12% 00:23:09.266 lat (msec) : 100=60.33%, 250=20.16% 00:23:09.266 cpu : usr=39.08%, sys=2.23%, ctx=1315, majf=0, minf=0 00:23:09.266 IO depths : 1=0.1%, 2=4.4%, 4=17.4%, 8=64.5%, 16=13.7%, 32=0.0%, >=64=0.0% 00:23:09.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.266 complete : 0=0.0%, 4=92.2%, 8=4.0%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.266 issued rwts: total=2173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.266 filename2: (groupid=0, jobs=1): err= 0: pid=83399: Thu Dec 5 19:43:00 2024 00:23:09.266 read: IOPS=232, BW=931KiB/s (953kB/s)(9320KiB/10012msec) 00:23:09.266 slat (usec): min=8, max=8036, avg=36.07, stdev=414.60 00:23:09.266 clat (msec): min=9, max=133, avg=68.60, stdev=21.62 00:23:09.266 lat (msec): min=9, max=133, avg=68.63, stdev=21.63 00:23:09.266 clat percentiles (msec): 00:23:09.266 | 1.00th=[ 19], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 00:23:09.266 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:23:09.266 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 107], 95.00th=[ 108], 00:23:09.266 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:23:09.266 | 99.99th=[ 134] 00:23:09.266 bw ( KiB/s): min= 608, max= 1112, per=4.05%, avg=896.42, stdev=156.26, samples=19 00:23:09.266 iops : min= 152, max= 278, avg=224.11, stdev=39.06, samples=19 00:23:09.266 lat (msec) : 10=0.26%, 20=0.94%, 50=23.09%, 100=64.42%, 250=11.29% 00:23:09.266 cpu : usr=31.40%, sys=1.93%, ctx=883, majf=0, minf=9 00:23:09.266 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.7%, 16=16.4%, 32=0.0%, >=64=0.0% 00:23:09.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.266 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.266 issued rwts: total=2330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.266 filename2: (groupid=0, jobs=1): err= 0: 
pid=83400: Thu Dec 5 19:43:00 2024 00:23:09.266 read: IOPS=236, BW=947KiB/s (970kB/s)(9484KiB/10013msec) 00:23:09.266 slat (usec): min=4, max=8028, avg=30.06, stdev=318.42 00:23:09.266 clat (msec): min=20, max=131, avg=67.43, stdev=20.88 00:23:09.266 lat (msec): min=20, max=131, avg=67.46, stdev=20.89 00:23:09.266 clat percentiles (msec): 00:23:09.266 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 48], 00:23:09.266 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:23:09.266 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 101], 95.00th=[ 108], 00:23:09.266 | 99.00th=[ 116], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:23:09.266 | 99.99th=[ 132] 00:23:09.266 bw ( KiB/s): min= 664, max= 1376, per=4.25%, avg=942.40, stdev=180.42, samples=20 00:23:09.266 iops : min= 166, max= 344, avg=235.50, stdev=45.03, samples=20 00:23:09.266 lat (msec) : 50=27.41%, 100=62.59%, 250=10.00% 00:23:09.266 cpu : usr=35.37%, sys=2.10%, ctx=967, majf=0, minf=9 00:23:09.266 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:09.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.266 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.266 issued rwts: total=2371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.266 filename2: (groupid=0, jobs=1): err= 0: pid=83401: Thu Dec 5 19:43:00 2024 00:23:09.266 read: IOPS=233, BW=935KiB/s (957kB/s)(9368KiB/10019msec) 00:23:09.266 slat (usec): min=6, max=8035, avg=27.32, stdev=298.58 00:23:09.266 clat (msec): min=18, max=133, avg=68.31, stdev=21.12 00:23:09.266 lat (msec): min=18, max=133, avg=68.34, stdev=21.12 00:23:09.266 clat percentiles (msec): 00:23:09.266 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 44], 20.00th=[ 48], 00:23:09.266 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:23:09.266 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 103], 95.00th=[ 109], 00:23:09.266 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 133], 00:23:09.266 | 99.99th=[ 133] 00:23:09.266 bw ( KiB/s): min= 664, max= 1328, per=4.20%, avg=930.40, stdev=181.63, samples=20 00:23:09.266 iops : min= 166, max= 332, avg=232.60, stdev=45.41, samples=20 00:23:09.266 lat (msec) : 20=0.26%, 50=23.61%, 100=65.54%, 250=10.59% 00:23:09.266 cpu : usr=37.10%, sys=2.12%, ctx=1236, majf=0, minf=9 00:23:09.266 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:09.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.266 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.266 issued rwts: total=2342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.266 filename2: (groupid=0, jobs=1): err= 0: pid=83402: Thu Dec 5 19:43:00 2024 00:23:09.266 read: IOPS=237, BW=950KiB/s (973kB/s)(9516KiB/10014msec) 00:23:09.266 slat (usec): min=5, max=8029, avg=21.68, stdev=232.38 00:23:09.266 clat (msec): min=9, max=143, avg=67.23, stdev=21.70 00:23:09.266 lat (msec): min=9, max=143, avg=67.25, stdev=21.69 00:23:09.266 clat percentiles (msec): 00:23:09.266 | 1.00th=[ 20], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 48], 00:23:09.266 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:23:09.266 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 100], 95.00th=[ 109], 00:23:09.266 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 129], 00:23:09.266 | 99.99th=[ 144] 00:23:09.266 
bw ( KiB/s): min= 664, max= 1176, per=4.15%, avg=918.84, stdev=153.85, samples=19 00:23:09.266 iops : min= 166, max= 294, avg=229.68, stdev=38.44, samples=19 00:23:09.267 lat (msec) : 10=0.08%, 20=1.09%, 50=26.23%, 100=63.14%, 250=9.46% 00:23:09.267 cpu : usr=33.70%, sys=2.09%, ctx=947, majf=0, minf=9 00:23:09.267 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:23:09.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.267 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.267 issued rwts: total=2379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.267 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.267 filename2: (groupid=0, jobs=1): err= 0: pid=83403: Thu Dec 5 19:43:00 2024 00:23:09.267 read: IOPS=227, BW=912KiB/s (934kB/s)(9140KiB/10024msec) 00:23:09.267 slat (usec): min=6, max=10033, avg=32.06, stdev=395.16 00:23:09.267 clat (msec): min=19, max=146, avg=69.98, stdev=22.49 00:23:09.267 lat (msec): min=20, max=146, avg=70.01, stdev=22.49 00:23:09.267 clat percentiles (msec): 00:23:09.267 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 49], 00:23:09.267 | 30.00th=[ 56], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:23:09.267 | 70.00th=[ 79], 80.00th=[ 87], 90.00th=[ 108], 95.00th=[ 109], 00:23:09.267 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 132], 99.95th=[ 144], 00:23:09.267 | 99.99th=[ 148] 00:23:09.267 bw ( KiB/s): min= 568, max= 1248, per=4.10%, avg=909.30, stdev=202.71, samples=20 00:23:09.267 iops : min= 142, max= 312, avg=227.30, stdev=50.66, samples=20 00:23:09.267 lat (msec) : 20=0.04%, 50=23.06%, 100=64.46%, 250=12.43% 00:23:09.267 cpu : usr=36.84%, sys=2.02%, ctx=1135, majf=0, minf=9 00:23:09.267 IO depths : 1=0.1%, 2=0.4%, 4=1.2%, 8=81.8%, 16=16.6%, 32=0.0%, >=64=0.0% 00:23:09.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.267 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.267 issued rwts: total=2285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:09.267 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.267 filename2: (groupid=0, jobs=1): err= 0: pid=83404: Thu Dec 5 19:43:00 2024 00:23:09.267 read: IOPS=219, BW=876KiB/s (897kB/s)(8764KiB/10004msec) 00:23:09.267 slat (usec): min=5, max=10028, avg=20.39, stdev=230.63 00:23:09.267 clat (msec): min=2, max=155, avg=72.89, stdev=26.22 00:23:09.267 lat (msec): min=2, max=155, avg=72.91, stdev=26.22 00:23:09.267 clat percentiles (msec): 00:23:09.267 | 1.00th=[ 6], 5.00th=[ 24], 10.00th=[ 45], 20.00th=[ 52], 00:23:09.267 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:23:09.267 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 108], 95.00th=[ 120], 00:23:09.267 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:23:09.267 | 99.99th=[ 155] 00:23:09.267 bw ( KiB/s): min= 512, max= 1024, per=3.68%, avg=815.68, stdev=183.53, samples=19 00:23:09.267 iops : min= 128, max= 256, avg=203.89, stdev=45.91, samples=19 00:23:09.267 lat (msec) : 4=0.18%, 10=2.01%, 20=1.60%, 50=15.06%, 100=65.95% 00:23:09.267 lat (msec) : 250=15.20% 00:23:09.267 cpu : usr=40.20%, sys=2.41%, ctx=1277, majf=0, minf=9 00:23:09.267 IO depths : 1=0.1%, 2=3.3%, 4=13.1%, 8=69.5%, 16=14.1%, 32=0.0%, >=64=0.0% 00:23:09.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.267 complete : 0=0.0%, 4=90.7%, 8=6.4%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:09.267 issued rwts: total=2191,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:23:09.267 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:09.267 00:23:09.267 Run status group 0 (all jobs): 00:23:09.267 READ: bw=21.6MiB/s (22.7MB/s), 846KiB/s-984KiB/s (866kB/s-1008kB/s), io=217MiB (227MB), run=10001-10032msec 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.267 19:43:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:09.267 19:43:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.267 bdev_null0 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.267 [2024-12-05 19:43:01.052652] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:09.267 
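The trace above has just finished target/dif.sh's create_subsystem helper for subsystem 0 (null bdev with DIF metadata, NVMe-oF subsystem, namespace, TCP listener on 10.0.0.3:4420) and is about to repeat the same loop for subsystem 1 below. A condensed sketch of that RPC sequence follows; it assumes an SPDK nvmf target is already running with the tcp transport created and that rpc_cmd resolves to scripts/rpc.py, and every value is copied from this log.

#!/usr/bin/env bash
# Sketch of the per-subsystem setup traced above (subsystem 0 shown).
# Assumption: the SPDK target app is running and $rpc points at scripts/rpc.py.
set -e
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

sub=0
# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1 protection
"$rpc" bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 1
# NVMe-oF subsystem that any host may connect to
"$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" \
       --serial-number "53313233-${sub}" --allow-any-host
# expose the null bdev as a namespace of that subsystem
"$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
# add the TCP listener the hosts in this test bed connect to
"$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" \
       -t tcp -a 10.0.0.3 -s 4420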
19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.267 bdev_null1 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.267 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:09.268 { 00:23:09.268 "params": { 00:23:09.268 "name": "Nvme$subsystem", 00:23:09.268 "trtype": "$TEST_TRANSPORT", 00:23:09.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.268 "adrfam": "ipv4", 00:23:09.268 
"trsvcid": "$NVMF_PORT", 00:23:09.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.268 "hdgst": ${hdgst:-false}, 00:23:09.268 "ddgst": ${ddgst:-false} 00:23:09.268 }, 00:23:09.268 "method": "bdev_nvme_attach_controller" 00:23:09.268 } 00:23:09.268 EOF 00:23:09.268 )") 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:09.268 { 00:23:09.268 "params": { 00:23:09.268 "name": "Nvme$subsystem", 00:23:09.268 "trtype": "$TEST_TRANSPORT", 00:23:09.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.268 "adrfam": "ipv4", 00:23:09.268 "trsvcid": "$NVMF_PORT", 00:23:09.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.268 "hdgst": ${hdgst:-false}, 00:23:09.268 "ddgst": ${ddgst:-false} 00:23:09.268 }, 00:23:09.268 "method": "bdev_nvme_attach_controller" 00:23:09.268 } 00:23:09.268 EOF 00:23:09.268 )") 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:09.268 "params": { 00:23:09.268 "name": "Nvme0", 00:23:09.268 "trtype": "tcp", 00:23:09.268 "traddr": "10.0.0.3", 00:23:09.268 "adrfam": "ipv4", 00:23:09.268 "trsvcid": "4420", 00:23:09.268 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:09.268 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:09.268 "hdgst": false, 00:23:09.268 "ddgst": false 00:23:09.268 }, 00:23:09.268 "method": "bdev_nvme_attach_controller" 00:23:09.268 },{ 00:23:09.268 "params": { 00:23:09.268 "name": "Nvme1", 00:23:09.268 "trtype": "tcp", 00:23:09.268 "traddr": "10.0.0.3", 00:23:09.268 "adrfam": "ipv4", 00:23:09.268 "trsvcid": "4420", 00:23:09.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.268 "hdgst": false, 00:23:09.268 "ddgst": false 00:23:09.268 }, 00:23:09.268 "method": "bdev_nvme_attach_controller" 00:23:09.268 }' 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:09.268 19:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:09.268 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:09.268 ... 00:23:09.268 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:09.268 ... 
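The filename0/filename1 banner lines above come from the job file gen_fio_conf writes to /dev/fd/61 using the parameters set at target/dif.sh@115 (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, one extra file). A hypothetical reconstruction of that job file is sketched below; the rw, bs, iodepth, numjobs and runtime values mirror the echoed settings, while the filename= bdev names are assumptions.

#!/usr/bin/env bash
# Hypothetical equivalent of the fio job fed to the spdk_bdev plugin for this run.
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
# SPDK's fio plugin is loaded via LD_PRELOAD and selected with ioengine=spdk_bdev
ioengine=spdk_bdev
thread=1
rw=randread
# read/write/trim block sizes, echoed above as (R) 8192B / (W) 16.0KiB / (T) 128KiB
bs=8k,16k,128k
iodepth=8
runtime=5
time_based=1
# two jobs per file section, i.e. the "Starting 4 threads" seen below
numjobs=2

[filename0]
# assumed bdev name: the first attached controller's first namespace
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF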
00:23:09.268 fio-3.35 00:23:09.268 Starting 4 threads 00:23:14.533 00:23:14.533 filename0: (groupid=0, jobs=1): err= 0: pid=83538: Thu Dec 5 19:43:06 2024 00:23:14.533 read: IOPS=2198, BW=17.2MiB/s (18.0MB/s)(85.9MiB/5003msec) 00:23:14.533 slat (usec): min=7, max=263, avg=11.61, stdev= 4.85 00:23:14.533 clat (usec): min=670, max=7250, avg=3607.13, stdev=1033.33 00:23:14.533 lat (usec): min=679, max=7263, avg=3618.74, stdev=1033.61 00:23:14.533 clat percentiles (usec): 00:23:14.533 | 1.00th=[ 1401], 5.00th=[ 1434], 10.00th=[ 2024], 20.00th=[ 2966], 00:23:14.533 | 30.00th=[ 3294], 40.00th=[ 3359], 50.00th=[ 3752], 60.00th=[ 3884], 00:23:14.533 | 70.00th=[ 4113], 80.00th=[ 4490], 90.00th=[ 4948], 95.00th=[ 5145], 00:23:14.533 | 99.00th=[ 5276], 99.50th=[ 5407], 99.90th=[ 6325], 99.95th=[ 6915], 00:23:14.533 | 99.99th=[ 7111] 00:23:14.533 bw ( KiB/s): min=16160, max=20592, per=27.14%, avg=17939.56, stdev=1391.90, samples=9 00:23:14.533 iops : min= 2020, max= 2574, avg=2242.44, stdev=173.99, samples=9 00:23:14.533 lat (usec) : 750=0.02%, 1000=0.06% 00:23:14.533 lat (msec) : 2=9.27%, 4=54.48%, 10=36.17% 00:23:14.533 cpu : usr=91.34%, sys=7.34%, ctx=93, majf=0, minf=0 00:23:14.533 IO depths : 1=0.1%, 2=3.6%, 4=64.3%, 8=32.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:14.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:14.533 complete : 0=0.0%, 4=98.6%, 8=1.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:14.533 issued rwts: total=10998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:14.533 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:14.533 filename0: (groupid=0, jobs=1): err= 0: pid=83539: Thu Dec 5 19:43:06 2024 00:23:14.533 read: IOPS=2058, BW=16.1MiB/s (16.9MB/s)(80.4MiB/5001msec) 00:23:14.533 slat (nsec): min=3970, max=55192, avg=15368.82, stdev=3574.88 00:23:14.533 clat (usec): min=959, max=6468, avg=3840.30, stdev=836.73 00:23:14.533 lat (usec): min=968, max=6482, avg=3855.67, stdev=836.65 00:23:14.533 clat percentiles (usec): 00:23:14.533 | 1.00th=[ 1631], 5.00th=[ 2114], 10.00th=[ 2671], 20.00th=[ 3294], 00:23:14.533 | 30.00th=[ 3326], 40.00th=[ 3752], 50.00th=[ 3884], 60.00th=[ 4178], 00:23:14.533 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 4883], 95.00th=[ 5145], 00:23:14.533 | 99.00th=[ 5276], 99.50th=[ 5342], 99.90th=[ 6128], 99.95th=[ 6259], 00:23:14.533 | 99.99th=[ 6456] 00:23:14.533 bw ( KiB/s): min=14976, max=18016, per=24.79%, avg=16387.78, stdev=1039.26, samples=9 00:23:14.533 iops : min= 1872, max= 2252, avg=2048.44, stdev=129.88, samples=9 00:23:14.533 lat (usec) : 1000=0.10% 00:23:14.533 lat (msec) : 2=2.60%, 4=50.34%, 10=46.95% 00:23:14.533 cpu : usr=91.86%, sys=7.28%, ctx=6, majf=0, minf=1 00:23:14.534 IO depths : 1=0.1%, 2=8.4%, 4=61.9%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:14.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:14.534 complete : 0=0.0%, 4=96.7%, 8=3.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:14.534 issued rwts: total=10295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:14.534 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:14.534 filename1: (groupid=0, jobs=1): err= 0: pid=83540: Thu Dec 5 19:43:06 2024 00:23:14.534 read: IOPS=1996, BW=15.6MiB/s (16.4MB/s)(78.0MiB/5002msec) 00:23:14.534 slat (nsec): min=3980, max=54361, avg=15153.59, stdev=4068.00 00:23:14.534 clat (usec): min=747, max=7670, avg=3960.63, stdev=778.42 00:23:14.534 lat (usec): min=756, max=7684, avg=3975.79, stdev=778.71 00:23:14.534 clat percentiles (usec): 00:23:14.534 | 1.00th=[ 1631], 5.00th=[ 
2540], 10.00th=[ 3261], 20.00th=[ 3326], 00:23:14.534 | 30.00th=[ 3392], 40.00th=[ 3818], 50.00th=[ 4113], 60.00th=[ 4228], 00:23:14.534 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5145], 00:23:14.534 | 99.00th=[ 5276], 99.50th=[ 5342], 99.90th=[ 5735], 99.95th=[ 5997], 00:23:14.534 | 99.99th=[ 7701] 00:23:14.534 bw ( KiB/s): min=14976, max=16848, per=23.95%, avg=15832.89, stdev=774.89, samples=9 00:23:14.534 iops : min= 1872, max= 2106, avg=1979.11, stdev=96.86, samples=9 00:23:14.534 lat (usec) : 750=0.01%, 1000=0.01% 00:23:14.534 lat (msec) : 2=1.88%, 4=46.71%, 10=51.39% 00:23:14.534 cpu : usr=91.96%, sys=7.20%, ctx=10, majf=0, minf=0 00:23:14.534 IO depths : 1=0.1%, 2=10.7%, 4=60.8%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:14.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:14.534 complete : 0=0.0%, 4=95.8%, 8=4.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:14.534 issued rwts: total=9985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:14.534 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:14.534 filename1: (groupid=0, jobs=1): err= 0: pid=83541: Thu Dec 5 19:43:06 2024 00:23:14.534 read: IOPS=2010, BW=15.7MiB/s (16.5MB/s)(78.6MiB/5001msec) 00:23:14.534 slat (nsec): min=4059, max=55129, avg=15011.97, stdev=4287.87 00:23:14.534 clat (usec): min=471, max=6423, avg=3931.28, stdev=810.91 00:23:14.534 lat (usec): min=483, max=6438, avg=3946.30, stdev=810.23 00:23:14.534 clat percentiles (usec): 00:23:14.534 | 1.00th=[ 1631], 5.00th=[ 2212], 10.00th=[ 3032], 20.00th=[ 3294], 00:23:14.534 | 30.00th=[ 3359], 40.00th=[ 3818], 50.00th=[ 4015], 60.00th=[ 4228], 00:23:14.534 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 5014], 95.00th=[ 5145], 00:23:14.534 | 99.00th=[ 5276], 99.50th=[ 5604], 99.90th=[ 6128], 99.95th=[ 6194], 00:23:14.534 | 99.99th=[ 6325] 00:23:14.534 bw ( KiB/s): min=14608, max=17760, per=24.15%, avg=15962.33, stdev=1129.12, samples=9 00:23:14.534 iops : min= 1826, max= 2220, avg=1995.22, stdev=141.22, samples=9 00:23:14.534 lat (usec) : 500=0.01%, 1000=0.04% 00:23:14.534 lat (msec) : 2=1.88%, 4=47.62%, 10=50.45% 00:23:14.534 cpu : usr=91.66%, sys=7.44%, ctx=12, majf=0, minf=0 00:23:14.534 IO depths : 1=0.1%, 2=10.1%, 4=61.0%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:14.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:14.534 complete : 0=0.0%, 4=96.1%, 8=3.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:14.534 issued rwts: total=10057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:14.534 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:14.534 00:23:14.534 Run status group 0 (all jobs): 00:23:14.534 READ: bw=64.5MiB/s (67.7MB/s), 15.6MiB/s-17.2MiB/s (16.4MB/s-18.0MB/s), io=323MiB (339MB), run=5001-5003msec 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.534 ************************************ 00:23:14.534 END TEST fio_dif_rand_params 00:23:14.534 ************************************ 00:23:14.534 00:23:14.534 real 0m23.713s 00:23:14.534 user 2m3.260s 00:23:14.534 sys 0m8.831s 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.534 19:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 19:43:07 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:14.534 19:43:07 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:14.534 19:43:07 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:14.534 19:43:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 ************************************ 00:23:14.534 START TEST fio_dif_digest 00:23:14.534 ************************************ 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 bdev_null0 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 [2024-12-05 19:43:07.261408] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1343 -- # local sanitizers 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:14.534 { 00:23:14.534 "params": { 00:23:14.534 "name": "Nvme$subsystem", 00:23:14.534 "trtype": "$TEST_TRANSPORT", 00:23:14.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.534 "adrfam": "ipv4", 00:23:14.534 "trsvcid": "$NVMF_PORT", 00:23:14.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.534 "hdgst": ${hdgst:-false}, 00:23:14.534 "ddgst": ${ddgst:-false} 00:23:14.534 }, 00:23:14.534 "method": "bdev_nvme_attach_controller" 00:23:14.534 } 00:23:14.534 EOF 00:23:14.534 )") 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
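The fio_dif_digest run reuses the same wiring with the differences visible in this trace: the null bdev is created with --dif-type 3, the controller is attached with header and data digests enabled ("hdgst": true, "ddgst": true in the rendered JSON just below), and fio drives 3 threads of 128 KiB reads at iodepth 3 for 10 seconds. A hedged sketch of that delta follows; the config wrapper and the Nvme0n1 bdev name are assumptions, as in the earlier sketches.

#!/usr/bin/env bash
# Digest-enabled attach config plus the matching job file for the 3-thread 128 KiB run.
printf '%s' '{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller","params":{"name":"Nvme0","trtype":"tcp","traddr":"10.0.0.3","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode0","hostnqn":"nqn.2016-06.io.spdk:host0","hdgst":true,"ddgst":true}}]}]}' \
  | jq . > /tmp/nvme_digest.json

cat > /tmp/dif_digest.fio <<'EOF'
[filename0]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=10
time_based=1
# assumed bdev name, as in the earlier job sketch
filename=Nvme0n1
EOF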
00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:14.534 "params": { 00:23:14.534 "name": "Nvme0", 00:23:14.534 "trtype": "tcp", 00:23:14.534 "traddr": "10.0.0.3", 00:23:14.534 "adrfam": "ipv4", 00:23:14.534 "trsvcid": "4420", 00:23:14.534 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:14.534 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:14.534 "hdgst": true, 00:23:14.534 "ddgst": true 00:23:14.534 }, 00:23:14.534 "method": "bdev_nvme_attach_controller" 00:23:14.534 }' 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:14.534 19:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:14.534 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:14.534 ... 
00:23:14.534 fio-3.35 00:23:14.534 Starting 3 threads 00:23:26.762 00:23:26.762 filename0: (groupid=0, jobs=1): err= 0: pid=83646: Thu Dec 5 19:43:18 2024 00:23:26.762 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(279MiB/10005msec) 00:23:26.762 slat (nsec): min=7720, max=35195, avg=13999.06, stdev=2114.65 00:23:26.762 clat (usec): min=10780, max=14426, avg=13398.18, stdev=140.74 00:23:26.762 lat (usec): min=10794, max=14438, avg=13412.18, stdev=140.77 00:23:26.762 clat percentiles (usec): 00:23:26.762 | 1.00th=[13173], 5.00th=[13304], 10.00th=[13304], 20.00th=[13304], 00:23:26.762 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13435], 60.00th=[13435], 00:23:26.762 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13435], 95.00th=[13566], 00:23:26.762 | 99.00th=[13829], 99.50th=[13960], 99.90th=[14353], 99.95th=[14353], 00:23:26.762 | 99.99th=[14484] 00:23:26.762 bw ( KiB/s): min=28416, max=29184, per=33.37%, avg=28618.11, stdev=347.45, samples=19 00:23:26.762 iops : min= 222, max= 228, avg=223.58, stdev= 2.71, samples=19 00:23:26.762 lat (msec) : 20=100.00% 00:23:26.762 cpu : usr=91.89%, sys=7.59%, ctx=17, majf=0, minf=0 00:23:26.762 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:26.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.762 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.762 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:26.762 filename0: (groupid=0, jobs=1): err= 0: pid=83647: Thu Dec 5 19:43:18 2024 00:23:26.762 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(279MiB/10005msec) 00:23:26.762 slat (nsec): min=7619, max=35868, avg=14586.91, stdev=2418.36 00:23:26.762 clat (usec): min=10780, max=14422, avg=13395.99, stdev=140.36 00:23:26.762 lat (usec): min=10794, max=14436, avg=13410.58, stdev=140.49 00:23:26.762 clat percentiles (usec): 00:23:26.762 | 1.00th=[13173], 5.00th=[13304], 10.00th=[13304], 20.00th=[13304], 00:23:26.762 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13435], 60.00th=[13435], 00:23:26.762 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13435], 95.00th=[13566], 00:23:26.762 | 99.00th=[13960], 99.50th=[13960], 99.90th=[14353], 99.95th=[14353], 00:23:26.762 | 99.99th=[14484] 00:23:26.762 bw ( KiB/s): min=28416, max=29184, per=33.37%, avg=28618.11, stdev=347.45, samples=19 00:23:26.762 iops : min= 222, max= 228, avg=223.58, stdev= 2.71, samples=19 00:23:26.762 lat (msec) : 20=100.00% 00:23:26.762 cpu : usr=91.56%, sys=7.89%, ctx=12, majf=0, minf=0 00:23:26.762 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:26.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.762 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.762 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:26.762 filename0: (groupid=0, jobs=1): err= 0: pid=83648: Thu Dec 5 19:43:18 2024 00:23:26.762 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(279MiB/10008msec) 00:23:26.762 slat (nsec): min=7451, max=36707, avg=10486.78, stdev=3096.72 00:23:26.762 clat (usec): min=13043, max=14339, avg=13407.19, stdev=97.89 00:23:26.762 lat (usec): min=13051, max=14369, avg=13417.67, stdev=98.16 00:23:26.762 clat percentiles (usec): 00:23:26.762 | 1.00th=[13173], 5.00th=[13304], 10.00th=[13304], 20.00th=[13304], 00:23:26.762 | 30.00th=[13435], 40.00th=[13435], 
50.00th=[13435], 60.00th=[13435], 00:23:26.762 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13435], 95.00th=[13566], 00:23:26.762 | 99.00th=[13960], 99.50th=[13960], 99.90th=[14353], 99.95th=[14353], 00:23:26.762 | 99.99th=[14353] 00:23:26.762 bw ( KiB/s): min=28416, max=29184, per=33.37%, avg=28618.11, stdev=347.45, samples=19 00:23:26.762 iops : min= 222, max= 228, avg=223.58, stdev= 2.71, samples=19 00:23:26.762 lat (msec) : 20=100.00% 00:23:26.762 cpu : usr=91.26%, sys=8.19%, ctx=116, majf=0, minf=0 00:23:26.762 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:26.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.762 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.762 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:26.762 00:23:26.762 Run status group 0 (all jobs): 00:23:26.762 READ: bw=83.7MiB/s (87.8MB/s), 27.9MiB/s-27.9MiB/s (29.3MB/s-29.3MB/s), io=838MiB (879MB), run=10005-10008msec 00:23:26.762 19:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:26.762 19:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:23:26.762 19:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:23:26.762 19:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:26.762 19:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:23:26.762 19:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:26.762 19:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.762 19:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:26.762 19:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.762 19:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:26.762 19:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.762 19:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:26.762 19:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.762 ************************************ 00:23:26.762 END TEST fio_dif_digest 00:23:26.762 ************************************ 00:23:26.762 00:23:26.762 real 0m11.049s 00:23:26.762 user 0m28.161s 00:23:26.762 sys 0m2.642s 00:23:26.762 19:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:26.762 19:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:26.762 19:43:18 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:26.762 19:43:18 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:23:26.762 19:43:18 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:26.762 19:43:18 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:23:26.762 19:43:18 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:26.762 19:43:18 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:23:26.762 19:43:18 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:26.762 19:43:18 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:26.762 rmmod nvme_tcp 00:23:26.762 rmmod nvme_fabrics 00:23:26.762 rmmod nvme_keyring 00:23:26.762 19:43:18 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:26.762 19:43:18 nvmf_dif 
-- nvmf/common.sh@128 -- # set -e 00:23:26.762 19:43:18 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:23:26.762 19:43:18 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82906 ']' 00:23:26.762 19:43:18 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82906 00:23:26.762 19:43:18 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 82906 ']' 00:23:26.762 19:43:18 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 82906 00:23:26.762 19:43:18 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:23:26.763 19:43:18 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.763 19:43:18 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82906 00:23:26.763 killing process with pid 82906 00:23:26.763 19:43:18 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:26.763 19:43:18 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:26.763 19:43:18 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82906' 00:23:26.763 19:43:18 nvmf_dif -- common/autotest_common.sh@973 -- # kill 82906 00:23:26.763 19:43:18 nvmf_dif -- common/autotest_common.sh@978 -- # wait 82906 00:23:26.763 19:43:18 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:23:26.763 19:43:18 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:26.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:26.763 Waiting for block devices as requested 00:23:26.763 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:26.763 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:26.763 19:43:19 nvmf_dif -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.763 19:43:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:26.763 19:43:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.763 19:43:19 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:23:26.763 00:23:26.763 real 0m59.873s 00:23:26.763 user 3m48.004s 00:23:26.763 sys 0m19.861s 00:23:26.763 19:43:19 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:26.763 ************************************ 00:23:26.763 END TEST nvmf_dif 00:23:26.763 19:43:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:26.763 ************************************ 00:23:26.763 19:43:19 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:26.763 19:43:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:26.763 19:43:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:26.763 19:43:19 -- common/autotest_common.sh@10 -- # set +x 00:23:26.763 ************************************ 00:23:26.763 START TEST nvmf_abort_qd_sizes 00:23:26.763 ************************************ 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:26.763 * Looking for test storage... 00:23:26.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:26.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.763 --rc genhtml_branch_coverage=1 00:23:26.763 --rc genhtml_function_coverage=1 00:23:26.763 --rc genhtml_legend=1 00:23:26.763 --rc geninfo_all_blocks=1 00:23:26.763 --rc geninfo_unexecuted_blocks=1 00:23:26.763 00:23:26.763 ' 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:26.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.763 --rc genhtml_branch_coverage=1 00:23:26.763 --rc genhtml_function_coverage=1 00:23:26.763 --rc genhtml_legend=1 00:23:26.763 --rc geninfo_all_blocks=1 00:23:26.763 --rc geninfo_unexecuted_blocks=1 00:23:26.763 00:23:26.763 ' 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:26.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.763 --rc genhtml_branch_coverage=1 00:23:26.763 --rc genhtml_function_coverage=1 00:23:26.763 --rc genhtml_legend=1 00:23:26.763 --rc geninfo_all_blocks=1 00:23:26.763 --rc geninfo_unexecuted_blocks=1 00:23:26.763 00:23:26.763 ' 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:26.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.763 --rc genhtml_branch_coverage=1 00:23:26.763 --rc genhtml_function_coverage=1 00:23:26.763 --rc genhtml_legend=1 00:23:26.763 --rc geninfo_all_blocks=1 00:23:26.763 --rc geninfo_unexecuted_blocks=1 00:23:26.763 00:23:26.763 ' 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.763 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:26.764 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:26.764 Cannot find device "nvmf_init_br" 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:26.764 Cannot find device "nvmf_init_br2" 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:26.764 Cannot find device "nvmf_tgt_br" 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:26.764 Cannot find device "nvmf_tgt_br2" 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:26.764 Cannot find device "nvmf_init_br" 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:26.764 Cannot find device "nvmf_init_br2" 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:26.764 Cannot find device "nvmf_tgt_br" 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:26.764 Cannot find device "nvmf_tgt_br2" 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:26.764 Cannot find device "nvmf_br" 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:26.764 Cannot find device "nvmf_init_if" 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:26.764 Cannot find device "nvmf_init_if2" 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:26.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:26.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:26.764 19:43:19 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:26.764 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:26.764 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:26.764 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:26.764 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:26.764 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:26.764 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:26.764 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:26.764 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:26.764 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:26.764 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:26.764 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:26.764 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:26.765 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:26.765 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:26.765 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:26.765 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:23:26.765 00:23:26.765 --- 10.0.0.3 ping statistics --- 00:23:26.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.765 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:23:26.765 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:26.765 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:26.765 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:23:26.765 00:23:26.765 --- 10.0.0.4 ping statistics --- 00:23:26.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.765 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:26.765 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:26.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:26.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:23:26.765 00:23:26.765 --- 10.0.0.1 ping statistics --- 00:23:26.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.765 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:23:26.765 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:26.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:26.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:23:26.765 00:23:26.765 --- 10.0.0.2 ping statistics --- 00:23:26.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.765 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:23:26.765 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.765 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:23:26.765 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:23:26.765 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:27.333 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:27.723 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:27.723 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:27.723 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.723 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:27.723 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:27.723 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.723 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:27.723 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:27.723 19:43:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:27.723 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:27.723 19:43:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.723 19:43:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:27.723 19:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84305 00:23:27.723 19:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:27.723 19:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84305 00:23:27.723 19:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84305 ']' 00:23:27.723 19:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.723 19:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.723 19:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.723 19:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.723 19:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:27.723 [2024-12-05 19:43:21.067925] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:23:27.723 [2024-12-05 19:43:21.068050] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.982 [2024-12-05 19:43:21.226640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:27.982 [2024-12-05 19:43:21.295081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.982 [2024-12-05 19:43:21.295404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.982 [2024-12-05 19:43:21.295501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.982 [2024-12-05 19:43:21.295596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.982 [2024-12-05 19:43:21.295685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.982 [2024-12-05 19:43:21.296976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.982 [2024-12-05 19:43:21.297148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.982 [2024-12-05 19:43:21.297311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:27.982 [2024-12-05 19:43:21.297365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.982 [2024-12-05 19:43:21.354196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:23:28.242 19:43:21 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:28.242 19:43:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:28.242 ************************************ 00:23:28.242 START TEST spdk_target_abort 00:23:28.242 ************************************ 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:28.242 spdk_targetn1 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:28.242 [2024-12-05 19:43:21.600360] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:23:28.242 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:28.243 [2024-12-05 19:43:21.648532] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:28.243 19:43:21 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:28.243 19:43:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:31.535 Initializing NVMe Controllers 00:23:31.535 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:31.535 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:31.535 Initialization complete. Launching workers. 
00:23:31.535 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10502, failed: 0 00:23:31.535 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1026, failed to submit 9476 00:23:31.535 success 758, unsuccessful 268, failed 0 00:23:31.535 19:43:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:31.535 19:43:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:35.725 Initializing NVMe Controllers 00:23:35.725 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:35.725 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:35.725 Initialization complete. Launching workers. 00:23:35.725 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9011, failed: 0 00:23:35.725 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1140, failed to submit 7871 00:23:35.725 success 418, unsuccessful 722, failed 0 00:23:35.725 19:43:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:35.725 19:43:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:38.260 Initializing NVMe Controllers 00:23:38.260 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:38.260 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:38.260 Initialization complete. Launching workers. 
00:23:38.260 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31486, failed: 0 00:23:38.260 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2332, failed to submit 29154 00:23:38.260 success 476, unsuccessful 1856, failed 0 00:23:38.260 19:43:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:38.260 19:43:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.260 19:43:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:38.260 19:43:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.260 19:43:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:38.260 19:43:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.260 19:43:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:38.827 19:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.827 19:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84305 00:23:38.827 19:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84305 ']' 00:23:38.827 19:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84305 00:23:38.827 19:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:23:38.827 19:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.827 19:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84305 00:23:38.827 killing process with pid 84305 00:23:38.827 19:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:38.827 19:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:38.827 19:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84305' 00:23:38.827 19:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84305 00:23:38.827 19:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84305 00:23:39.086 ************************************ 00:23:39.086 END TEST spdk_target_abort 00:23:39.086 ************************************ 00:23:39.086 00:23:39.086 real 0m10.832s 00:23:39.086 user 0m40.821s 00:23:39.086 sys 0m2.138s 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:39.086 19:43:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:39.086 19:43:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:39.086 19:43:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.086 19:43:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:39.086 ************************************ 00:23:39.086 START TEST kernel_target_abort 00:23:39.086 
************************************ 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:39.086 19:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:39.653 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:39.653 Waiting for block devices as requested 00:23:39.653 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:39.653 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:39.653 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:39.653 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:39.653 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:39.653 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:39.653 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:39.653 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:39.653 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:39.653 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:39.653 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:39.653 No valid GPT data, bailing 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:39.912 No valid GPT data, bailing 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:39.912 No valid GPT data, bailing 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:39.912 No valid GPT data, bailing 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:39.912 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 --hostid=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 -a 10.0.0.1 -t tcp -s 4420 00:23:40.171 00:23:40.171 Discovery Log Number of Records 2, Generation counter 2 00:23:40.171 =====Discovery Log Entry 0====== 00:23:40.171 trtype: tcp 00:23:40.171 adrfam: ipv4 00:23:40.171 subtype: current discovery subsystem 00:23:40.171 treq: not specified, sq flow control disable supported 00:23:40.171 portid: 1 00:23:40.171 trsvcid: 4420 00:23:40.171 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:40.171 traddr: 10.0.0.1 00:23:40.171 eflags: none 00:23:40.171 sectype: none 00:23:40.171 =====Discovery Log Entry 1====== 00:23:40.171 trtype: tcp 00:23:40.171 adrfam: ipv4 00:23:40.171 subtype: nvme subsystem 00:23:40.171 treq: not specified, sq flow control disable supported 00:23:40.171 portid: 1 00:23:40.171 trsvcid: 4420 00:23:40.171 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:40.171 traddr: 10.0.0.1 00:23:40.171 eflags: none 00:23:40.171 sectype: none 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:40.171 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:40.172 19:43:33 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:40.172 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:40.172 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:40.172 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:40.172 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:40.172 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:40.172 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:40.172 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:40.172 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:40.172 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:40.172 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:40.172 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:40.172 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:40.172 19:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:43.455 Initializing NVMe Controllers 00:23:43.455 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:43.455 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:43.455 Initialization complete. Launching workers. 00:23:43.455 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32089, failed: 0 00:23:43.455 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32089, failed to submit 0 00:23:43.455 success 0, unsuccessful 32089, failed 0 00:23:43.455 19:43:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:43.455 19:43:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:46.743 Initializing NVMe Controllers 00:23:46.743 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:46.743 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:46.743 Initialization complete. Launching workers. 
00:23:46.743 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69690, failed: 0 00:23:46.743 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30555, failed to submit 39135 00:23:46.743 success 0, unsuccessful 30555, failed 0 00:23:46.743 19:43:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:46.743 19:43:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:50.024 Initializing NVMe Controllers 00:23:50.024 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:50.024 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:50.024 Initialization complete. Launching workers. 00:23:50.024 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84146, failed: 0 00:23:50.024 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21030, failed to submit 63116 00:23:50.024 success 0, unsuccessful 21030, failed 0 00:23:50.024 19:43:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:23:50.024 19:43:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:50.024 19:43:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:23:50.024 19:43:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:50.024 19:43:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:50.024 19:43:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:50.024 19:43:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:50.024 19:43:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:50.024 19:43:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:50.024 19:43:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:50.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:52.839 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:52.839 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:52.839 ************************************ 00:23:52.839 END TEST kernel_target_abort 00:23:52.839 ************************************ 00:23:52.839 00:23:52.839 real 0m13.380s 00:23:52.839 user 0m6.366s 00:23:52.839 sys 0m4.523s 00:23:52.839 19:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:52.839 19:43:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:52.839 
19:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:52.839 rmmod nvme_tcp 00:23:52.839 rmmod nvme_fabrics 00:23:52.839 rmmod nvme_keyring 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84305 ']' 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84305 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84305 ']' 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84305 00:23:52.839 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84305) - No such process 00:23:52.839 Process with pid 84305 is not found 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84305 is not found' 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:23:52.839 19:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:52.839 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:53.098 Waiting for block devices as requested 00:23:53.098 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:53.098 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:53.098 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:53.098 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:53.098 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:23:53.098 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:23:53.098 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:53.098 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:23:53.098 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:53.098 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:53.098 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:53.098 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:53.358 19:43:46 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:23:53.358 00:23:53.358 real 0m27.230s 00:23:53.358 user 0m48.370s 00:23:53.358 sys 0m8.064s 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.358 19:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:53.358 ************************************ 00:23:53.358 END TEST nvmf_abort_qd_sizes 00:23:53.358 ************************************ 00:23:53.358 19:43:46 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:53.359 19:43:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:53.359 19:43:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.359 19:43:46 -- common/autotest_common.sh@10 -- # set +x 00:23:53.618 ************************************ 00:23:53.618 START TEST keyring_file 00:23:53.618 ************************************ 00:23:53.618 19:43:46 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:53.618 * Looking for test storage... 
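Condensed from the kernel_target_abort trace above, the setup and teardown of the in-kernel nvmet target reduce to the configfs sequence below. This is a sketch of this particular run: the NQN, backing device (/dev/nvme1n1) and listen address are the values this job used, and because xtrace hides redirection targets, the attribute names are the standard nvmet configfs ones assumed to match what nvmf/common.sh writes to.

  cfg=/sys/kernel/config/nvmet
  # expose /dev/nvme1n1 as namespace 1 of nqn.2016-06.io.spdk:testnqn
  mkdir $cfg/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir $cfg/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  mkdir $cfg/ports/1
  echo SPDK-nqn.2016-06.io.spdk:testnqn > $cfg/subsystems/nqn.2016-06.io.spdk:testnqn/attr_model
  echo 1            > $cfg/subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  echo /dev/nvme1n1 > $cfg/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1            > $cfg/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  echo 10.0.0.1     > $cfg/ports/1/addr_traddr
  echo tcp          > $cfg/ports/1/addr_trtype
  echo 4420         > $cfg/ports/1/addr_trsvcid
  echo ipv4         > $cfg/ports/1/addr_adrfam
  ln -s $cfg/subsystems/nqn.2016-06.io.spdk:testnqn $cfg/ports/1/subsystems/
  nvme discover -t tcp -a 10.0.0.1 -s 4420   # sanity check: two discovery log entries (--hostnqn/--hostid omitted here)

  # teardown (clean_kernel_target)
  rm -f  $cfg/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir  $cfg/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir  $cfg/ports/1
  rmdir  $cfg/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet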
00:23:53.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:53.618 19:43:46 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:53.618 19:43:46 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:23:53.618 19:43:46 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:53.618 19:43:46 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@345 -- # : 1 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@353 -- # local d=1 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@355 -- # echo 1 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@353 -- # local d=2 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@355 -- # echo 2 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.618 19:43:46 keyring_file -- scripts/common.sh@368 -- # return 0 00:23:53.618 19:43:46 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.618 19:43:46 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:53.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.618 --rc genhtml_branch_coverage=1 00:23:53.618 --rc genhtml_function_coverage=1 00:23:53.618 --rc genhtml_legend=1 00:23:53.618 --rc geninfo_all_blocks=1 00:23:53.618 --rc geninfo_unexecuted_blocks=1 00:23:53.618 00:23:53.618 ' 00:23:53.618 19:43:46 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:53.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.618 --rc genhtml_branch_coverage=1 00:23:53.618 --rc genhtml_function_coverage=1 00:23:53.618 --rc genhtml_legend=1 00:23:53.618 --rc geninfo_all_blocks=1 00:23:53.618 --rc 
geninfo_unexecuted_blocks=1 00:23:53.618 00:23:53.618 ' 00:23:53.618 19:43:46 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:53.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.618 --rc genhtml_branch_coverage=1 00:23:53.618 --rc genhtml_function_coverage=1 00:23:53.618 --rc genhtml_legend=1 00:23:53.618 --rc geninfo_all_blocks=1 00:23:53.618 --rc geninfo_unexecuted_blocks=1 00:23:53.618 00:23:53.618 ' 00:23:53.618 19:43:46 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:53.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.618 --rc genhtml_branch_coverage=1 00:23:53.618 --rc genhtml_function_coverage=1 00:23:53.618 --rc genhtml_legend=1 00:23:53.618 --rc geninfo_all_blocks=1 00:23:53.618 --rc geninfo_unexecuted_blocks=1 00:23:53.618 00:23:53.618 ' 00:23:53.618 19:43:46 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:53.618 19:43:46 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:53.618 19:43:46 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:23:53.618 19:43:46 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.618 19:43:46 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.618 19:43:46 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.618 19:43:46 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.618 19:43:46 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.619 19:43:46 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.619 19:43:46 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.619 19:43:46 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.619 19:43:46 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.619 19:43:46 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:53.619 19:43:47 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:23:53.619 19:43:47 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.619 19:43:47 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.619 19:43:47 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.619 19:43:47 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.619 19:43:47 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.619 19:43:47 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.619 19:43:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:23:53.619 19:43:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@51 -- # : 0 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:53.619 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:53.619 19:43:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:53.619 19:43:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:53.619 19:43:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:53.619 19:43:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:53.619 19:43:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:53.619 19:43:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:23:53.619 19:43:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:53.619 19:43:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:53.619 19:43:47 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:53.619 19:43:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:53.619 19:43:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:53.619 19:43:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:53.619 19:43:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VmA17CXSah 00:23:53.619 19:43:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:23:53.619 19:43:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:23:53.878 19:43:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VmA17CXSah 00:23:53.878 19:43:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VmA17CXSah 00:23:53.878 19:43:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.VmA17CXSah 00:23:53.878 19:43:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:53.878 19:43:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:53.878 19:43:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:23:53.878 19:43:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:53.878 19:43:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:53.878 19:43:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:53.878 19:43:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6z055g7gJb 00:23:53.878 19:43:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:53.878 19:43:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:53.878 19:43:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:23:53.878 19:43:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:53.878 19:43:47 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:23:53.878 19:43:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:23:53.878 19:43:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:23:53.878 19:43:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6z055g7gJb 00:23:53.878 19:43:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6z055g7gJb 00:23:53.878 19:43:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.6z055g7gJb 00:23:53.878 19:43:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=85218 00:23:53.878 19:43:47 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.878 19:43:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85218 00:23:53.878 19:43:47 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85218 ']' 00:23:53.878 19:43:47 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.878 19:43:47 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
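The prep_key helper traced here amounts to: create a temp file, encode the raw hex key into the NVMeTLSkey-1 interchange form (the inline "python -" helper in nvmf/common.sh does the wrapping), lock the file down to 0600, and register it with the bdevperf instance over its RPC socket. A condensed sketch using the paths and key material from this run:

  key0_path=/tmp/tmp.VmA17CXSah     # produced by mktemp
  # format_interchange_psk 00112233445566778899aabbccddeeff 0 writes the
  # NVMeTLSkey-1-wrapped key into $key0_path (encoding done by the python helper)
  chmod 0600 "$key0_path"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      keyring_file_add_key key0 "$key0_path"

The 0600 mode is not incidental: later in the run the same file is temporarily chmod'ed to 0660 and keyring_file_add_key is expected to fail with "Invalid permissions for key file", which is exactly the RPC error shown further down.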
00:23:53.878 19:43:47 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.878 19:43:47 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.878 19:43:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:53.878 [2024-12-05 19:43:47.213076] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:23:53.878 [2024-12-05 19:43:47.213229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85218 ] 00:23:54.137 [2024-12-05 19:43:47.362829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.137 [2024-12-05 19:43:47.425679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.137 [2024-12-05 19:43:47.499553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:23:54.396 19:43:47 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:54.396 [2024-12-05 19:43:47.713696] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.396 null0 00:23:54.396 [2024-12-05 19:43:47.745665] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:54.396 [2024-12-05 19:43:47.745895] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.396 19:43:47 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:54.396 [2024-12-05 19:43:47.773638] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:23:54.396 request: 00:23:54.396 { 00:23:54.396 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:54.396 "secure_channel": false, 00:23:54.396 "listen_address": { 00:23:54.396 "trtype": "tcp", 00:23:54.396 "traddr": "127.0.0.1", 00:23:54.396 "trsvcid": "4420" 00:23:54.396 }, 00:23:54.396 "method": "nvmf_subsystem_add_listener", 
00:23:54.396 "req_id": 1 00:23:54.396 } 00:23:54.396 Got JSON-RPC error response 00:23:54.396 response: 00:23:54.396 { 00:23:54.396 "code": -32602, 00:23:54.396 "message": "Invalid parameters" 00:23:54.396 } 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:54.396 19:43:47 keyring_file -- keyring/file.sh@47 -- # bperfpid=85228 00:23:54.396 19:43:47 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:54.396 19:43:47 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85228 /var/tmp/bperf.sock 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85228 ']' 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:54.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.396 19:43:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:54.656 [2024-12-05 19:43:47.838514] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:23:54.656 [2024-12-05 19:43:47.838858] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85228 ] 00:23:54.656 [2024-12-05 19:43:47.992103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.656 [2024-12-05 19:43:48.054512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.915 [2024-12-05 19:43:48.114944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:54.915 19:43:48 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.915 19:43:48 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:23:54.915 19:43:48 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VmA17CXSah 00:23:54.915 19:43:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VmA17CXSah 00:23:55.175 19:43:48 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6z055g7gJb 00:23:55.175 19:43:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6z055g7gJb 00:23:55.433 19:43:48 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:23:55.433 19:43:48 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:23:55.433 19:43:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:55.433 19:43:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:55.433 19:43:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:55.693 19:43:49 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.VmA17CXSah == \/\t\m\p\/\t\m\p\.\V\m\A\1\7\C\X\S\a\h ]] 00:23:55.693 19:43:49 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:23:55.693 19:43:49 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:23:55.693 19:43:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:55.693 19:43:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:55.693 19:43:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:55.952 19:43:49 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.6z055g7gJb == \/\t\m\p\/\t\m\p\.\6\z\0\5\5\g\7\g\J\b ]] 00:23:55.952 19:43:49 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:23:55.952 19:43:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:55.952 19:43:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:55.952 19:43:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:55.952 19:43:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:55.952 19:43:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:56.212 19:43:49 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:56.212 19:43:49 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:23:56.212 19:43:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:56.212 19:43:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:56.212 19:43:49 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:56.212 19:43:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:56.212 19:43:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:56.471 19:43:49 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:23:56.471 19:43:49 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:56.471 19:43:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:56.730 [2024-12-05 19:43:50.152989] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:57.002 nvme0n1 00:23:57.002 19:43:50 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:23:57.002 19:43:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:57.002 19:43:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:57.002 19:43:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:57.002 19:43:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:57.002 19:43:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:57.261 19:43:50 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:23:57.261 19:43:50 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:23:57.261 19:43:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:57.261 19:43:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:57.261 19:43:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:57.261 19:43:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:57.261 19:43:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:57.520 19:43:50 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:23:57.520 19:43:50 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:57.520 Running I/O for 1 seconds... 
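With key0 and key1 registered, the positive-path check is a TLS-PSK attach to the loopback listener followed by a short bdevperf run, both driven over the bdevperf RPC socket. The commands, as issued in this run:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests

The repeated keyring_get_keys | jq calls around the attach are refcount assertions: with the controller attached via key0, key0's refcnt is expected to be 2 and key1's to stay at 1.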
00:23:58.906 12016.00 IOPS, 46.94 MiB/s 00:23:58.906 Latency(us) 00:23:58.906 [2024-12-05T19:43:52.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.906 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:23:58.906 nvme0n1 : 1.01 12063.51 47.12 0.00 0.00 10580.23 4796.04 49569.05 00:23:58.906 [2024-12-05T19:43:52.343Z] =================================================================================================================== 00:23:58.906 [2024-12-05T19:43:52.343Z] Total : 12063.51 47.12 0.00 0.00 10580.23 4796.04 49569.05 00:23:58.906 { 00:23:58.906 "results": [ 00:23:58.906 { 00:23:58.906 "job": "nvme0n1", 00:23:58.906 "core_mask": "0x2", 00:23:58.906 "workload": "randrw", 00:23:58.906 "percentage": 50, 00:23:58.906 "status": "finished", 00:23:58.906 "queue_depth": 128, 00:23:58.906 "io_size": 4096, 00:23:58.906 "runtime": 1.006838, 00:23:58.906 "iops": 12063.509720531008, 00:23:58.906 "mibps": 47.12308484582425, 00:23:58.906 "io_failed": 0, 00:23:58.906 "io_timeout": 0, 00:23:58.906 "avg_latency_us": 10580.229874706225, 00:23:58.906 "min_latency_us": 4796.043636363636, 00:23:58.906 "max_latency_us": 49569.04727272727 00:23:58.906 } 00:23:58.906 ], 00:23:58.906 "core_count": 1 00:23:58.906 } 00:23:58.906 19:43:51 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:58.906 19:43:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:58.906 19:43:52 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:23:58.906 19:43:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:58.906 19:43:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:58.906 19:43:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:58.906 19:43:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:58.906 19:43:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:59.165 19:43:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:23:59.165 19:43:52 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:23:59.165 19:43:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:59.165 19:43:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:59.165 19:43:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:59.165 19:43:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:59.165 19:43:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:59.423 19:43:52 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:23:59.423 19:43:52 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:59.423 19:43:52 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:23:59.423 19:43:52 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:59.423 19:43:52 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:23:59.423 19:43:52 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.423 19:43:52 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:23:59.424 19:43:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.424 19:43:52 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:59.424 19:43:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:59.682 [2024-12-05 19:43:53.081245] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:59.682 [2024-12-05 19:43:53.081473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebe5d0 (107): Transport endpoint is not connected 00:23:59.682 [2024-12-05 19:43:53.082459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebe5d0 (9): Bad file descriptor 00:23:59.682 [2024-12-05 19:43:53.083467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:23:59.682 [2024-12-05 19:43:53.083493] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:59.682 [2024-12-05 19:43:53.083522] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:23:59.682 [2024-12-05 19:43:53.083534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:23:59.682 request: 00:23:59.682 { 00:23:59.682 "name": "nvme0", 00:23:59.682 "trtype": "tcp", 00:23:59.682 "traddr": "127.0.0.1", 00:23:59.682 "adrfam": "ipv4", 00:23:59.682 "trsvcid": "4420", 00:23:59.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:59.682 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:59.682 "prchk_reftag": false, 00:23:59.682 "prchk_guard": false, 00:23:59.682 "hdgst": false, 00:23:59.682 "ddgst": false, 00:23:59.682 "psk": "key1", 00:23:59.682 "allow_unrecognized_csi": false, 00:23:59.682 "method": "bdev_nvme_attach_controller", 00:23:59.682 "req_id": 1 00:23:59.682 } 00:23:59.682 Got JSON-RPC error response 00:23:59.682 response: 00:23:59.682 { 00:23:59.682 "code": -5, 00:23:59.682 "message": "Input/output error" 00:23:59.682 } 00:23:59.682 19:43:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:23:59.682 19:43:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:59.682 19:43:53 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:59.682 19:43:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:59.682 19:43:53 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:23:59.683 19:43:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:59.683 19:43:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:59.683 19:43:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:59.683 19:43:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:59.683 19:43:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:00.250 19:43:53 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:00.250 19:43:53 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:24:00.250 19:43:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:00.250 19:43:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:00.250 19:43:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:00.250 19:43:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:00.250 19:43:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:00.250 19:43:53 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:24:00.250 19:43:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:24:00.250 19:43:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:00.509 19:43:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:24:00.509 19:43:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:00.768 19:43:54 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:24:00.768 19:43:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:00.768 19:43:54 keyring_file -- keyring/file.sh@78 -- # jq length 00:24:01.027 19:43:54 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:24:01.027 19:43:54 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.VmA17CXSah 00:24:01.027 19:43:54 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.VmA17CXSah 00:24:01.027 19:43:54 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:24:01.027 19:43:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.VmA17CXSah 00:24:01.027 19:43:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:01.027 19:43:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.027 19:43:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:01.027 19:43:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.027 19:43:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VmA17CXSah 00:24:01.027 19:43:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VmA17CXSah 00:24:01.285 [2024-12-05 19:43:54.621845] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VmA17CXSah': 0100660 00:24:01.285 [2024-12-05 19:43:54.621900] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:01.285 request: 00:24:01.285 { 00:24:01.285 "name": "key0", 00:24:01.285 "path": "/tmp/tmp.VmA17CXSah", 00:24:01.285 "method": "keyring_file_add_key", 00:24:01.285 "req_id": 1 00:24:01.285 } 00:24:01.285 Got JSON-RPC error response 00:24:01.285 response: 00:24:01.285 { 00:24:01.285 "code": -1, 00:24:01.285 "message": "Operation not permitted" 00:24:01.285 } 00:24:01.285 19:43:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:01.285 19:43:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:01.285 19:43:54 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:01.285 19:43:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:01.285 19:43:54 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.VmA17CXSah 00:24:01.285 19:43:54 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VmA17CXSah 00:24:01.285 19:43:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VmA17CXSah 00:24:01.544 19:43:54 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.VmA17CXSah 00:24:01.545 19:43:54 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:24:01.545 19:43:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:01.545 19:43:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:01.545 19:43:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:01.545 19:43:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:01.545 19:43:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:01.805 19:43:55 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:24:01.805 19:43:55 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:01.805 19:43:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:24:01.805 19:43:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:01.805 19:43:55 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:01.805 19:43:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.805 19:43:55 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:01.805 19:43:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.805 19:43:55 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:01.805 19:43:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:02.064 [2024-12-05 19:43:55.398012] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.VmA17CXSah': No such file or directory 00:24:02.064 [2024-12-05 19:43:55.398061] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:02.064 [2024-12-05 19:43:55.398097] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:02.064 [2024-12-05 19:43:55.398107] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:24:02.064 [2024-12-05 19:43:55.398116] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:02.064 [2024-12-05 19:43:55.398124] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:02.064 request: 00:24:02.064 { 00:24:02.064 "name": "nvme0", 00:24:02.064 "trtype": "tcp", 00:24:02.064 "traddr": "127.0.0.1", 00:24:02.064 "adrfam": "ipv4", 00:24:02.064 "trsvcid": "4420", 00:24:02.064 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:02.064 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:02.064 "prchk_reftag": false, 00:24:02.064 "prchk_guard": false, 00:24:02.064 "hdgst": false, 00:24:02.064 "ddgst": false, 00:24:02.064 "psk": "key0", 00:24:02.064 "allow_unrecognized_csi": false, 00:24:02.064 "method": "bdev_nvme_attach_controller", 00:24:02.064 "req_id": 1 00:24:02.064 } 00:24:02.064 Got JSON-RPC error response 00:24:02.064 response: 00:24:02.064 { 00:24:02.064 "code": -19, 00:24:02.064 "message": "No such device" 00:24:02.064 } 00:24:02.064 19:43:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:02.064 19:43:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:02.064 19:43:55 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:02.064 19:43:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:02.064 19:43:55 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:24:02.064 19:43:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:02.323 19:43:55 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:02.323 19:43:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:02.323 19:43:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:02.323 19:43:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:02.323 
19:43:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:02.323 19:43:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:02.323 19:43:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BsL6onWSmb 00:24:02.323 19:43:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:02.323 19:43:55 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:02.323 19:43:55 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:02.323 19:43:55 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:02.323 19:43:55 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:02.323 19:43:55 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:02.323 19:43:55 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:02.323 19:43:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BsL6onWSmb 00:24:02.323 19:43:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BsL6onWSmb 00:24:02.323 19:43:55 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.BsL6onWSmb 00:24:02.323 19:43:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BsL6onWSmb 00:24:02.323 19:43:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BsL6onWSmb 00:24:02.890 19:43:56 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:02.890 19:43:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:03.149 nvme0n1 00:24:03.149 19:43:56 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:24:03.149 19:43:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:03.149 19:43:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:03.149 19:43:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:03.149 19:43:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:03.149 19:43:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:03.408 19:43:56 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:24:03.408 19:43:56 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:24:03.408 19:43:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:03.666 19:43:56 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:24:03.666 19:43:56 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:24:03.666 19:43:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:03.666 19:43:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:03.666 19:43:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:03.666 19:43:57 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:24:03.666 19:43:57 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:24:03.666 19:43:57 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:24:03.666 19:43:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:03.666 19:43:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:03.666 19:43:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:03.666 19:43:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:04.233 19:43:57 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:24:04.233 19:43:57 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:04.233 19:43:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:04.233 19:43:57 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:24:04.233 19:43:57 keyring_file -- keyring/file.sh@105 -- # jq length 00:24:04.233 19:43:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:04.801 19:43:57 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:24:04.801 19:43:57 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BsL6onWSmb 00:24:04.801 19:43:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BsL6onWSmb 00:24:04.801 19:43:58 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6z055g7gJb 00:24:04.801 19:43:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6z055g7gJb 00:24:05.059 19:43:58 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:05.059 19:43:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:05.625 nvme0n1 00:24:05.625 19:43:58 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:24:05.625 19:43:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:05.882 19:43:59 keyring_file -- keyring/file.sh@113 -- # config='{ 00:24:05.882 "subsystems": [ 00:24:05.882 { 00:24:05.882 "subsystem": "keyring", 00:24:05.882 "config": [ 00:24:05.882 { 00:24:05.882 "method": "keyring_file_add_key", 00:24:05.882 "params": { 00:24:05.882 "name": "key0", 00:24:05.882 "path": "/tmp/tmp.BsL6onWSmb" 00:24:05.882 } 00:24:05.882 }, 00:24:05.882 { 00:24:05.882 "method": "keyring_file_add_key", 00:24:05.882 "params": { 00:24:05.882 "name": "key1", 00:24:05.882 "path": "/tmp/tmp.6z055g7gJb" 00:24:05.882 } 00:24:05.882 } 00:24:05.882 ] 00:24:05.882 }, 00:24:05.882 { 00:24:05.882 "subsystem": "iobuf", 00:24:05.882 "config": [ 00:24:05.882 { 00:24:05.882 "method": "iobuf_set_options", 00:24:05.882 "params": { 00:24:05.882 "small_pool_count": 8192, 00:24:05.882 "large_pool_count": 1024, 00:24:05.882 "small_bufsize": 8192, 00:24:05.882 "large_bufsize": 135168, 00:24:05.882 "enable_numa": false 00:24:05.882 } 00:24:05.883 } 00:24:05.883 ] 00:24:05.883 }, 00:24:05.883 { 00:24:05.883 "subsystem": 
"sock", 00:24:05.883 "config": [ 00:24:05.883 { 00:24:05.883 "method": "sock_set_default_impl", 00:24:05.883 "params": { 00:24:05.883 "impl_name": "uring" 00:24:05.883 } 00:24:05.883 }, 00:24:05.883 { 00:24:05.883 "method": "sock_impl_set_options", 00:24:05.883 "params": { 00:24:05.883 "impl_name": "ssl", 00:24:05.883 "recv_buf_size": 4096, 00:24:05.883 "send_buf_size": 4096, 00:24:05.883 "enable_recv_pipe": true, 00:24:05.883 "enable_quickack": false, 00:24:05.883 "enable_placement_id": 0, 00:24:05.883 "enable_zerocopy_send_server": true, 00:24:05.883 "enable_zerocopy_send_client": false, 00:24:05.883 "zerocopy_threshold": 0, 00:24:05.883 "tls_version": 0, 00:24:05.883 "enable_ktls": false 00:24:05.883 } 00:24:05.883 }, 00:24:05.883 { 00:24:05.883 "method": "sock_impl_set_options", 00:24:05.883 "params": { 00:24:05.883 "impl_name": "posix", 00:24:05.883 "recv_buf_size": 2097152, 00:24:05.883 "send_buf_size": 2097152, 00:24:05.883 "enable_recv_pipe": true, 00:24:05.883 "enable_quickack": false, 00:24:05.883 "enable_placement_id": 0, 00:24:05.883 "enable_zerocopy_send_server": true, 00:24:05.883 "enable_zerocopy_send_client": false, 00:24:05.883 "zerocopy_threshold": 0, 00:24:05.883 "tls_version": 0, 00:24:05.883 "enable_ktls": false 00:24:05.883 } 00:24:05.883 }, 00:24:05.883 { 00:24:05.883 "method": "sock_impl_set_options", 00:24:05.883 "params": { 00:24:05.883 "impl_name": "uring", 00:24:05.883 "recv_buf_size": 2097152, 00:24:05.883 "send_buf_size": 2097152, 00:24:05.883 "enable_recv_pipe": true, 00:24:05.883 "enable_quickack": false, 00:24:05.883 "enable_placement_id": 0, 00:24:05.883 "enable_zerocopy_send_server": false, 00:24:05.883 "enable_zerocopy_send_client": false, 00:24:05.883 "zerocopy_threshold": 0, 00:24:05.883 "tls_version": 0, 00:24:05.883 "enable_ktls": false 00:24:05.883 } 00:24:05.883 } 00:24:05.883 ] 00:24:05.883 }, 00:24:05.883 { 00:24:05.883 "subsystem": "vmd", 00:24:05.883 "config": [] 00:24:05.883 }, 00:24:05.883 { 00:24:05.883 "subsystem": "accel", 00:24:05.883 "config": [ 00:24:05.883 { 00:24:05.883 "method": "accel_set_options", 00:24:05.883 "params": { 00:24:05.883 "small_cache_size": 128, 00:24:05.883 "large_cache_size": 16, 00:24:05.883 "task_count": 2048, 00:24:05.883 "sequence_count": 2048, 00:24:05.883 "buf_count": 2048 00:24:05.883 } 00:24:05.883 } 00:24:05.883 ] 00:24:05.883 }, 00:24:05.883 { 00:24:05.883 "subsystem": "bdev", 00:24:05.883 "config": [ 00:24:05.883 { 00:24:05.883 "method": "bdev_set_options", 00:24:05.883 "params": { 00:24:05.883 "bdev_io_pool_size": 65535, 00:24:05.883 "bdev_io_cache_size": 256, 00:24:05.883 "bdev_auto_examine": true, 00:24:05.883 "iobuf_small_cache_size": 128, 00:24:05.883 "iobuf_large_cache_size": 16 00:24:05.883 } 00:24:05.883 }, 00:24:05.883 { 00:24:05.883 "method": "bdev_raid_set_options", 00:24:05.883 "params": { 00:24:05.883 "process_window_size_kb": 1024, 00:24:05.883 "process_max_bandwidth_mb_sec": 0 00:24:05.883 } 00:24:05.883 }, 00:24:05.883 { 00:24:05.883 "method": "bdev_iscsi_set_options", 00:24:05.883 "params": { 00:24:05.883 "timeout_sec": 30 00:24:05.883 } 00:24:05.883 }, 00:24:05.883 { 00:24:05.883 "method": "bdev_nvme_set_options", 00:24:05.883 "params": { 00:24:05.883 "action_on_timeout": "none", 00:24:05.883 "timeout_us": 0, 00:24:05.883 "timeout_admin_us": 0, 00:24:05.883 "keep_alive_timeout_ms": 10000, 00:24:05.883 "arbitration_burst": 0, 00:24:05.883 "low_priority_weight": 0, 00:24:05.883 "medium_priority_weight": 0, 00:24:05.883 "high_priority_weight": 0, 00:24:05.883 "nvme_adminq_poll_period_us": 
10000, 00:24:05.883 "nvme_ioq_poll_period_us": 0, 00:24:05.883 "io_queue_requests": 512, 00:24:05.883 "delay_cmd_submit": true, 00:24:05.883 "transport_retry_count": 4, 00:24:05.883 "bdev_retry_count": 3, 00:24:05.883 "transport_ack_timeout": 0, 00:24:05.883 "ctrlr_loss_timeout_sec": 0, 00:24:05.883 "reconnect_delay_sec": 0, 00:24:05.883 "fast_io_fail_timeout_sec": 0, 00:24:05.883 "disable_auto_failback": false, 00:24:05.883 "generate_uuids": false, 00:24:05.883 "transport_tos": 0, 00:24:05.883 "nvme_error_stat": false, 00:24:05.883 "rdma_srq_size": 0, 00:24:05.883 "io_path_stat": false, 00:24:05.883 "allow_accel_sequence": false, 00:24:05.883 "rdma_max_cq_size": 0, 00:24:05.883 "rdma_cm_event_timeout_ms": 0, 00:24:05.883 "dhchap_digests": [ 00:24:05.883 "sha256", 00:24:05.883 "sha384", 00:24:05.883 "sha512" 00:24:05.883 ], 00:24:05.883 "dhchap_dhgroups": [ 00:24:05.883 "null", 00:24:05.883 "ffdhe2048", 00:24:05.883 "ffdhe3072", 00:24:05.883 "ffdhe4096", 00:24:05.883 "ffdhe6144", 00:24:05.883 "ffdhe8192" 00:24:05.883 ] 00:24:05.883 } 00:24:05.883 }, 00:24:05.883 { 00:24:05.883 "method": "bdev_nvme_attach_controller", 00:24:05.883 "params": { 00:24:05.883 "name": "nvme0", 00:24:05.883 "trtype": "TCP", 00:24:05.883 "adrfam": "IPv4", 00:24:05.883 "traddr": "127.0.0.1", 00:24:05.883 "trsvcid": "4420", 00:24:05.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.883 "prchk_reftag": false, 00:24:05.883 "prchk_guard": false, 00:24:05.883 "ctrlr_loss_timeout_sec": 0, 00:24:05.883 "reconnect_delay_sec": 0, 00:24:05.883 "fast_io_fail_timeout_sec": 0, 00:24:05.883 "psk": "key0", 00:24:05.883 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:05.883 "hdgst": false, 00:24:05.883 "ddgst": false, 00:24:05.883 "multipath": "multipath" 00:24:05.883 } 00:24:05.883 }, 00:24:05.883 { 00:24:05.883 "method": "bdev_nvme_set_hotplug", 00:24:05.883 "params": { 00:24:05.883 "period_us": 100000, 00:24:05.883 "enable": false 00:24:05.883 } 00:24:05.883 }, 00:24:05.883 { 00:24:05.883 "method": "bdev_wait_for_examine" 00:24:05.883 } 00:24:05.883 ] 00:24:05.883 }, 00:24:05.883 { 00:24:05.883 "subsystem": "nbd", 00:24:05.883 "config": [] 00:24:05.883 } 00:24:05.883 ] 00:24:05.883 }' 00:24:05.883 19:43:59 keyring_file -- keyring/file.sh@115 -- # killprocess 85228 00:24:05.883 19:43:59 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85228 ']' 00:24:05.883 19:43:59 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85228 00:24:05.883 19:43:59 keyring_file -- common/autotest_common.sh@959 -- # uname 00:24:05.883 19:43:59 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.883 19:43:59 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85228 00:24:05.883 19:43:59 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:05.883 19:43:59 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:05.883 killing process with pid 85228 00:24:05.883 19:43:59 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85228' 00:24:05.883 19:43:59 keyring_file -- common/autotest_common.sh@973 -- # kill 85228 00:24:05.883 Received shutdown signal, test time was about 1.000000 seconds 00:24:05.883 00:24:05.883 Latency(us) 00:24:05.883 [2024-12-05T19:43:59.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.883 [2024-12-05T19:43:59.320Z] =================================================================================================================== 00:24:05.883 
[2024-12-05T19:43:59.320Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.883 19:43:59 keyring_file -- common/autotest_common.sh@978 -- # wait 85228 00:24:06.141 19:43:59 keyring_file -- keyring/file.sh@118 -- # bperfpid=85476 00:24:06.141 19:43:59 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85476 /var/tmp/bperf.sock 00:24:06.141 19:43:59 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:06.141 19:43:59 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85476 ']' 00:24:06.141 19:43:59 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:24:06.141 "subsystems": [ 00:24:06.141 { 00:24:06.141 "subsystem": "keyring", 00:24:06.141 "config": [ 00:24:06.141 { 00:24:06.141 "method": "keyring_file_add_key", 00:24:06.141 "params": { 00:24:06.141 "name": "key0", 00:24:06.141 "path": "/tmp/tmp.BsL6onWSmb" 00:24:06.141 } 00:24:06.141 }, 00:24:06.141 { 00:24:06.141 "method": "keyring_file_add_key", 00:24:06.141 "params": { 00:24:06.141 "name": "key1", 00:24:06.141 "path": "/tmp/tmp.6z055g7gJb" 00:24:06.141 } 00:24:06.141 } 00:24:06.141 ] 00:24:06.141 }, 00:24:06.141 { 00:24:06.141 "subsystem": "iobuf", 00:24:06.141 "config": [ 00:24:06.141 { 00:24:06.141 "method": "iobuf_set_options", 00:24:06.141 "params": { 00:24:06.141 "small_pool_count": 8192, 00:24:06.141 "large_pool_count": 1024, 00:24:06.141 "small_bufsize": 8192, 00:24:06.141 "large_bufsize": 135168, 00:24:06.141 "enable_numa": false 00:24:06.141 } 00:24:06.141 } 00:24:06.141 ] 00:24:06.141 }, 00:24:06.141 { 00:24:06.141 "subsystem": "sock", 00:24:06.141 "config": [ 00:24:06.141 { 00:24:06.141 "method": "sock_set_default_impl", 00:24:06.141 "params": { 00:24:06.141 "impl_name": "uring" 00:24:06.141 } 00:24:06.141 }, 00:24:06.141 { 00:24:06.141 "method": "sock_impl_set_options", 00:24:06.141 "params": { 00:24:06.141 "impl_name": "ssl", 00:24:06.141 "recv_buf_size": 4096, 00:24:06.141 "send_buf_size": 4096, 00:24:06.141 "enable_recv_pipe": true, 00:24:06.141 "enable_quickack": false, 00:24:06.141 "enable_placement_id": 0, 00:24:06.141 "enable_zerocopy_send_server": true, 00:24:06.141 "enable_zerocopy_send_client": false, 00:24:06.141 "zerocopy_threshold": 0, 00:24:06.141 "tls_version": 0, 00:24:06.141 "enable_ktls": false 00:24:06.141 } 00:24:06.141 }, 00:24:06.141 { 00:24:06.141 "method": "sock_impl_set_options", 00:24:06.141 "params": { 00:24:06.141 "impl_name": "posix", 00:24:06.141 "recv_buf_size": 2097152, 00:24:06.141 "send_buf_size": 2097152, 00:24:06.141 "enable_recv_pipe": true, 00:24:06.141 "enable_quickack": false, 00:24:06.141 "enable_placement_id": 0, 00:24:06.141 "enable_zerocopy_send_server": true, 00:24:06.141 "enable_zerocopy_send_client": false, 00:24:06.141 "zerocopy_threshold": 0, 00:24:06.141 "tls_version": 0, 00:24:06.141 "enable_ktls": false 00:24:06.141 } 00:24:06.141 }, 00:24:06.141 { 00:24:06.141 "method": "sock_impl_set_options", 00:24:06.141 "params": { 00:24:06.141 "impl_name": "uring", 00:24:06.141 "recv_buf_size": 2097152, 00:24:06.141 "send_buf_size": 2097152, 00:24:06.141 "enable_recv_pipe": true, 00:24:06.141 "enable_quickack": false, 00:24:06.141 "enable_placement_id": 0, 00:24:06.141 "enable_zerocopy_send_server": false, 00:24:06.141 "enable_zerocopy_send_client": false, 00:24:06.141 "zerocopy_threshold": 0, 00:24:06.141 "tls_version": 0, 00:24:06.141 "enable_ktls": false 00:24:06.141 } 00:24:06.141 } 00:24:06.141 ] 00:24:06.141 }, 00:24:06.141 { 00:24:06.141 
"subsystem": "vmd", 00:24:06.141 "config": [] 00:24:06.141 }, 00:24:06.141 { 00:24:06.141 "subsystem": "accel", 00:24:06.141 "config": [ 00:24:06.141 { 00:24:06.141 "method": "accel_set_options", 00:24:06.141 "params": { 00:24:06.141 "small_cache_size": 128, 00:24:06.141 "large_cache_size": 16, 00:24:06.141 "task_count": 2048, 00:24:06.141 "sequence_count": 2048, 00:24:06.141 "buf_count": 2048 00:24:06.141 } 00:24:06.141 } 00:24:06.141 ] 00:24:06.141 }, 00:24:06.141 { 00:24:06.141 "subsystem": "bdev", 00:24:06.141 "config": [ 00:24:06.141 { 00:24:06.141 "method": "bdev_set_options", 00:24:06.141 "params": { 00:24:06.141 "bdev_io_pool_size": 65535, 00:24:06.141 "bdev_io_cache_size": 256, 00:24:06.141 "bdev_auto_examine": true, 00:24:06.141 "iobuf_small_cache_size": 128, 00:24:06.141 "iobuf_large_cache_size": 16 00:24:06.141 } 00:24:06.141 }, 00:24:06.141 { 00:24:06.141 "method": "bdev_raid_set_options", 00:24:06.141 "params": { 00:24:06.141 "process_window_size_kb": 1024, 00:24:06.141 "process_max_bandwidth_mb_sec": 0 00:24:06.141 } 00:24:06.141 }, 00:24:06.141 { 00:24:06.141 "method": "bdev_iscsi_set_options", 00:24:06.141 "params": { 00:24:06.141 "timeout_sec": 30 00:24:06.141 } 00:24:06.141 }, 00:24:06.141 { 00:24:06.141 "method": "bdev_nvme_set_options", 00:24:06.141 "params": { 00:24:06.141 "action_on_timeout": "none", 00:24:06.141 "timeout_us": 0, 00:24:06.141 "timeout_admin_us": 0, 00:24:06.141 "keep_alive_timeout_ms": 10000, 00:24:06.141 "arbitration_burst": 0, 00:24:06.141 "low_priority_weight": 0, 00:24:06.141 "medium_priority_weight": 0, 00:24:06.141 "high_priority_weight": 0, 00:24:06.141 "nvme_adminq_poll_period_us": 10000, 00:24:06.141 "nvme_ioq_poll_period_us": 0, 00:24:06.141 "io_queue_requests": 512, 00:24:06.141 "delay_cmd_submit": true, 00:24:06.141 "transport_retry_count": 4, 00:24:06.141 "bdev_retry_count": 3, 00:24:06.141 "transport_ack_timeout": 0, 00:24:06.141 "ctrlr_loss_timeout_sec": 0, 00:24:06.141 "reconnect_delay_sec": 0, 00:24:06.141 "fast_io_fail_timeout_sec": 0, 00:24:06.141 "disable_auto_failback": false, 00:24:06.141 "generate_uuids": false, 00:24:06.141 "transport_tos": 0, 00:24:06.141 "nvme_error_stat": false, 00:24:06.141 "rdma_srq_size": 0, 00:24:06.141 "io_path_stat": false, 00:24:06.141 "allow_accel_sequence": false, 00:24:06.141 "rdma_max_cq_size": 0, 00:24:06.141 "rdma_cm_event_timeout_ms": 0, 00:24:06.141 "dhchap_digests": [ 00:24:06.141 "sha256", 00:24:06.141 "sha384", 00:24:06.141 "sha512" 00:24:06.141 ], 00:24:06.141 "dhchap_dhgroups": [ 00:24:06.141 "null", 00:24:06.141 "ffdhe2048", 00:24:06.141 "ffdhe3072", 00:24:06.141 "ffdhe4096", 00:24:06.141 "ffdhe6144", 00:24:06.141 "ffdhe8192" 00:24:06.141 ] 00:24:06.141 } 00:24:06.141 }, 00:24:06.141 { 00:24:06.141 "method": "bdev_nvme_attach_controller", 00:24:06.141 "params": { 00:24:06.142 "name": "nvme0", 00:24:06.142 "trtype": "TCP", 00:24:06.142 "adrfam": "IPv4", 00:24:06.142 "traddr": "127.0.0.1", 00:24:06.142 "trsvcid": "4420", 00:24:06.142 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:06.142 "prchk_reftag": false, 00:24:06.142 "prchk_guard": false, 00:24:06.142 "ctrlr_loss_timeout_sec": 0, 00:24:06.142 "reconnect_delay_sec": 0, 00:24:06.142 "fast_io_fail_timeout_sec": 0, 00:24:06.142 "psk": "key0", 00:24:06.142 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:06.142 "hdgst": false, 00:24:06.142 "ddgst": false, 00:24:06.142 "multipath": "multipath" 00:24:06.142 } 00:24:06.142 }, 00:24:06.142 { 00:24:06.142 "method": "bdev_nvme_set_hotplug", 00:24:06.142 "params": { 00:24:06.142 "period_us": 100000, 
00:24:06.142 "enable": false 00:24:06.142 } 00:24:06.142 }, 00:24:06.142 { 00:24:06.142 "method": "bdev_wait_for_examine" 00:24:06.142 } 00:24:06.142 ] 00:24:06.142 }, 00:24:06.142 { 00:24:06.142 "subsystem": "nbd", 00:24:06.142 "config": [] 00:24:06.142 } 00:24:06.142 ] 00:24:06.142 }' 00:24:06.142 19:43:59 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:06.142 19:43:59 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.142 19:43:59 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:06.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:06.142 19:43:59 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.142 19:43:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:06.142 [2024-12-05 19:43:59.397444] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:24:06.142 [2024-12-05 19:43:59.397550] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85476 ] 00:24:06.142 [2024-12-05 19:43:59.542795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.399 [2024-12-05 19:43:59.604887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.399 [2024-12-05 19:43:59.743600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:06.399 [2024-12-05 19:43:59.805666] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:06.964 19:44:00 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.964 19:44:00 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:24:06.964 19:44:00 keyring_file -- keyring/file.sh@121 -- # jq length 00:24:06.964 19:44:00 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:24:06.964 19:44:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:07.221 19:44:00 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:07.221 19:44:00 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:24:07.221 19:44:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:07.221 19:44:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:07.221 19:44:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:07.221 19:44:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:07.221 19:44:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:07.787 19:44:00 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:24:07.787 19:44:00 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:24:07.787 19:44:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:07.787 19:44:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:07.787 19:44:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:07.787 19:44:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:07.787 19:44:00 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:08.045 19:44:01 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:24:08.045 19:44:01 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:24:08.045 19:44:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:08.045 19:44:01 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:24:08.303 19:44:01 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:24:08.303 19:44:01 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:08.303 19:44:01 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.BsL6onWSmb /tmp/tmp.6z055g7gJb 00:24:08.303 19:44:01 keyring_file -- keyring/file.sh@20 -- # killprocess 85476 00:24:08.303 19:44:01 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85476 ']' 00:24:08.303 19:44:01 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85476 00:24:08.303 19:44:01 keyring_file -- common/autotest_common.sh@959 -- # uname 00:24:08.303 19:44:01 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.303 19:44:01 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85476 00:24:08.303 killing process with pid 85476 00:24:08.303 Received shutdown signal, test time was about 1.000000 seconds 00:24:08.303 00:24:08.303 Latency(us) 00:24:08.303 [2024-12-05T19:44:01.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.303 [2024-12-05T19:44:01.740Z] =================================================================================================================== 00:24:08.303 [2024-12-05T19:44:01.740Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:08.303 19:44:01 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:08.303 19:44:01 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:08.303 19:44:01 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85476' 00:24:08.303 19:44:01 keyring_file -- common/autotest_common.sh@973 -- # kill 85476 00:24:08.303 19:44:01 keyring_file -- common/autotest_common.sh@978 -- # wait 85476 00:24:08.560 19:44:01 keyring_file -- keyring/file.sh@21 -- # killprocess 85218 00:24:08.560 19:44:01 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85218 ']' 00:24:08.560 19:44:01 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85218 00:24:08.560 19:44:01 keyring_file -- common/autotest_common.sh@959 -- # uname 00:24:08.560 19:44:01 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.560 19:44:01 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85218 00:24:08.560 killing process with pid 85218 00:24:08.560 19:44:01 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:08.560 19:44:01 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:08.560 19:44:01 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85218' 00:24:08.560 19:44:01 keyring_file -- common/autotest_common.sh@973 -- # kill 85218 00:24:08.560 19:44:01 keyring_file -- common/autotest_common.sh@978 -- # wait 85218 00:24:08.818 00:24:08.818 real 0m15.376s 00:24:08.818 user 0m39.084s 00:24:08.818 sys 0m2.994s 00:24:08.818 ************************************ 00:24:08.818 END TEST keyring_file 00:24:08.818 ************************************ 
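For reference, the keyring_file sequence exercised above can be replayed by hand against the bdevperf instance on /var/tmp/bperf.sock. This is a minimal sketch assembled from the commands visible in this log, not the test script itself; the key file path is illustrative (the test uses a mktemp file), and the PSK string is the interchange form of 00112233445566778899aabbccddeeff with digest 0 as printed later in this log.

  # stage a file-backed PSK (interchange format), mode 0600 as keyring/common.sh enforces
  KEYFILE=/tmp/spdk-test-key0    # hypothetical path; the test uses a mktemp file
  echo 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEYFILE"
  chmod 0600 "$KEYFILE"

  # register the key and attach a TCP controller that references it by name
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$KEYFILE"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

  # refcnt is 2 while the controller holds the key; remove_key only marks it removed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq length   # 0 once the last reference is gone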
00:24:08.818 19:44:02 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.818 19:44:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:08.818 19:44:02 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:24:08.818 19:44:02 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:08.818 19:44:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:08.818 19:44:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.818 19:44:02 -- common/autotest_common.sh@10 -- # set +x 00:24:08.818 ************************************ 00:24:08.818 START TEST keyring_linux 00:24:08.818 ************************************ 00:24:08.818 19:44:02 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:08.818 Joined session keyring: 80888990 00:24:09.077 * Looking for test storage... 00:24:09.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:09.077 19:44:02 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:09.077 19:44:02 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:24:09.077 19:44:02 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:09.077 19:44:02 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@345 -- # : 1 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:09.077 19:44:02 keyring_linux -- scripts/common.sh@368 -- # return 0 00:24:09.077 19:44:02 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:09.077 19:44:02 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:09.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.077 --rc genhtml_branch_coverage=1 00:24:09.077 --rc genhtml_function_coverage=1 00:24:09.077 --rc genhtml_legend=1 00:24:09.077 --rc geninfo_all_blocks=1 00:24:09.077 --rc geninfo_unexecuted_blocks=1 00:24:09.077 00:24:09.077 ' 00:24:09.077 19:44:02 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:09.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.077 --rc genhtml_branch_coverage=1 00:24:09.077 --rc genhtml_function_coverage=1 00:24:09.077 --rc genhtml_legend=1 00:24:09.077 --rc geninfo_all_blocks=1 00:24:09.077 --rc geninfo_unexecuted_blocks=1 00:24:09.077 00:24:09.077 ' 00:24:09.077 19:44:02 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:09.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.077 --rc genhtml_branch_coverage=1 00:24:09.077 --rc genhtml_function_coverage=1 00:24:09.077 --rc genhtml_legend=1 00:24:09.077 --rc geninfo_all_blocks=1 00:24:09.077 --rc geninfo_unexecuted_blocks=1 00:24:09.077 00:24:09.077 ' 00:24:09.077 19:44:02 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:09.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.077 --rc genhtml_branch_coverage=1 00:24:09.077 --rc genhtml_function_coverage=1 00:24:09.077 --rc genhtml_legend=1 00:24:09.077 --rc geninfo_all_blocks=1 00:24:09.077 --rc geninfo_unexecuted_blocks=1 00:24:09.077 00:24:09.077 ' 00:24:09.077 19:44:02 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:09.077 19:44:02 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:09.077 19:44:02 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:09.077 19:44:02 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.077 19:44:02 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.077 19:44:02 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.078 19:44:02 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=bf08626d-6d1d-4cb1-9877-bff179cd6dc1 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:09.078 19:44:02 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:24:09.078 19:44:02 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.078 19:44:02 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.078 19:44:02 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.078 19:44:02 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.078 19:44:02 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.078 19:44:02 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.078 19:44:02 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:09.078 19:44:02 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:09.078 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:09.078 19:44:02 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:09.078 19:44:02 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:09.078 19:44:02 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:09.078 19:44:02 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:09.078 19:44:02 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:09.078 19:44:02 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:09.078 19:44:02 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:09.078 19:44:02 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:09.078 19:44:02 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:09.078 19:44:02 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:09.078 19:44:02 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:09.078 19:44:02 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:09.078 19:44:02 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@733 -- # python - 00:24:09.078 19:44:02 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:09.078 /tmp/:spdk-test:key0 00:24:09.078 19:44:02 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:09.078 19:44:02 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:09.078 19:44:02 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:09.078 19:44:02 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:09.078 19:44:02 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:09.078 19:44:02 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:09.078 19:44:02 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:09.078 19:44:02 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:24:09.078 19:44:02 keyring_linux -- nvmf/common.sh@733 -- # python - 00:24:09.336 19:44:02 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:09.336 /tmp/:spdk-test:key1 00:24:09.336 19:44:02 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:09.336 19:44:02 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85598 00:24:09.336 19:44:02 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:09.336 19:44:02 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85598 00:24:09.336 19:44:02 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85598 ']' 00:24:09.336 19:44:02 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.336 19:44:02 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.336 19:44:02 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.336 19:44:02 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.336 19:44:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:09.336 [2024-12-05 19:44:02.610203] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:24:09.336 [2024-12-05 19:44:02.610346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85598 ] 00:24:09.336 [2024-12-05 19:44:02.758599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.594 [2024-12-05 19:44:02.809089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.594 [2024-12-05 19:44:02.880538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:09.852 19:44:03 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.852 19:44:03 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:24:09.852 19:44:03 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:24:09.852 19:44:03 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.852 19:44:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:09.852 [2024-12-05 19:44:03.079610] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.852 null0 00:24:09.852 [2024-12-05 19:44:03.111552] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:09.852 [2024-12-05 19:44:03.111739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:09.852 19:44:03 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.852 19:44:03 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:24:09.852 766182704 00:24:09.852 19:44:03 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:24:09.852 729034795 00:24:09.852 19:44:03 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85614 00:24:09.852 19:44:03 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:24:09.852 19:44:03 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85614 /var/tmp/bperf.sock 00:24:09.852 19:44:03 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85614 ']' 00:24:09.852 19:44:03 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:09.852 19:44:03 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:09.852 19:44:03 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:09.852 19:44:03 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.852 19:44:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:09.852 [2024-12-05 19:44:03.195058] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:24:09.852 [2024-12-05 19:44:03.195191] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85614 ] 00:24:10.111 [2024-12-05 19:44:03.342982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.111 [2024-12-05 19:44:03.405277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.111 19:44:03 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.112 19:44:03 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:24:10.112 19:44:03 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:24:10.112 19:44:03 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:24:10.371 19:44:03 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:24:10.371 19:44:03 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:10.653 [2024-12-05 19:44:04.057422] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:10.913 19:44:04 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:10.913 19:44:04 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:11.172 [2024-12-05 19:44:04.350347] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.172 nvme0n1 00:24:11.172 19:44:04 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:24:11.172 19:44:04 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:24:11.172 19:44:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:11.172 19:44:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:11.172 19:44:04 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:11.172 19:44:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:11.431 19:44:04 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:24:11.431 19:44:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:11.431 19:44:04 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:24:11.431 19:44:04 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:24:11.431 19:44:04 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:11.431 19:44:04 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:24:11.431 19:44:04 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:11.692 19:44:04 keyring_linux -- keyring/linux.sh@25 -- # sn=766182704 00:24:11.692 19:44:04 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:24:11.692 19:44:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
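Condensed from the keyring_linux steps above, the kernel-keyring variant looks roughly like this. It assumes the commands run inside the session keyring joined by keyctl-session-wrapper, that bdevperf was started with --wait-for-rpc (hence the explicit framework_start_init), and that the serial number 766182704 is simply the one this particular run happened to get.

  # stage the PSK in the session keyring under the name the test uses
  keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s   # prints the key serial, 766182704 in this run

  # enable the linux keyring backend in bperf, finish init, then attach by kernel key name
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

  # cross-check: the serial reported by keyring_get_keys must match the kernel's view of the key
  keyctl search @s user :spdk-test:key0
  keyctl print 766182704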
00:24:11.692 19:44:04 keyring_linux -- keyring/linux.sh@26 -- # [[ 766182704 == \7\6\6\1\8\2\7\0\4 ]] 00:24:11.692 19:44:04 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 766182704 00:24:11.692 19:44:04 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:24:11.692 19:44:04 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:11.692 Running I/O for 1 seconds... 00:24:13.066 13578.00 IOPS, 53.04 MiB/s 00:24:13.066 Latency(us) 00:24:13.066 [2024-12-05T19:44:06.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.066 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:13.066 nvme0n1 : 1.01 13589.71 53.08 0.00 0.00 9373.33 7030.23 17992.61 00:24:13.066 [2024-12-05T19:44:06.503Z] =================================================================================================================== 00:24:13.066 [2024-12-05T19:44:06.503Z] Total : 13589.71 53.08 0.00 0.00 9373.33 7030.23 17992.61 00:24:13.066 { 00:24:13.066 "results": [ 00:24:13.066 { 00:24:13.066 "job": "nvme0n1", 00:24:13.066 "core_mask": "0x2", 00:24:13.066 "workload": "randread", 00:24:13.066 "status": "finished", 00:24:13.066 "queue_depth": 128, 00:24:13.066 "io_size": 4096, 00:24:13.066 "runtime": 1.008631, 00:24:13.066 "iops": 13589.707236838844, 00:24:13.066 "mibps": 53.08479389390173, 00:24:13.066 "io_failed": 0, 00:24:13.066 "io_timeout": 0, 00:24:13.066 "avg_latency_us": 9373.32642087321, 00:24:13.066 "min_latency_us": 7030.225454545454, 00:24:13.066 "max_latency_us": 17992.61090909091 00:24:13.066 } 00:24:13.066 ], 00:24:13.066 "core_count": 1 00:24:13.066 } 00:24:13.066 19:44:06 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:13.066 19:44:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:13.066 19:44:06 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:24:13.066 19:44:06 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:24:13.066 19:44:06 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:13.066 19:44:06 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:13.066 19:44:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:13.066 19:44:06 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:13.324 19:44:06 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:24:13.324 19:44:06 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:13.324 19:44:06 keyring_linux -- keyring/linux.sh@23 -- # return 00:24:13.324 19:44:06 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:13.324 19:44:06 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:24:13.324 19:44:06 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:13.324 
19:44:06 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:13.324 19:44:06 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:13.324 19:44:06 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:13.324 19:44:06 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:13.324 19:44:06 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:13.324 19:44:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:13.633 [2024-12-05 19:44:06.965783] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:13.633 [2024-12-05 19:44:06.966323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18445d0 (107): Transport endpoint is not connected 00:24:13.633 [2024-12-05 19:44:06.967313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18445d0 (9): Bad file descriptor 00:24:13.633 [2024-12-05 19:44:06.968312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:24:13.633 [2024-12-05 19:44:06.968333] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:13.633 [2024-12-05 19:44:06.968344] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:24:13.633 [2024-12-05 19:44:06.968356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
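The errors above and the JSON-RPC exchange that follows are the expected outcome: attaching with :spdk-test:key1 is supposed to fail, presumably because that key carries a different PSK (112233445566778899aabbccddeeff00) than the one the target listener was set up with. linux.sh asserts this with autotest_common.sh's NOT helper; a minimal sketch of the pattern, using a simplified stand-in for NOT:

  # NOT inverts the exit status, so this test step only passes if the attach attempt fails
  NOT() { if "$@"; then return 1; else return 0; fi; }   # simplified stand-in; the real helper lives in autotest_common.sh
  NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1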
00:24:13.633 request: 00:24:13.633 { 00:24:13.633 "name": "nvme0", 00:24:13.633 "trtype": "tcp", 00:24:13.633 "traddr": "127.0.0.1", 00:24:13.633 "adrfam": "ipv4", 00:24:13.633 "trsvcid": "4420", 00:24:13.633 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:13.633 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:13.633 "prchk_reftag": false, 00:24:13.633 "prchk_guard": false, 00:24:13.633 "hdgst": false, 00:24:13.633 "ddgst": false, 00:24:13.633 "psk": ":spdk-test:key1", 00:24:13.633 "allow_unrecognized_csi": false, 00:24:13.633 "method": "bdev_nvme_attach_controller", 00:24:13.633 "req_id": 1 00:24:13.633 } 00:24:13.633 Got JSON-RPC error response 00:24:13.633 response: 00:24:13.633 { 00:24:13.633 "code": -5, 00:24:13.633 "message": "Input/output error" 00:24:13.633 } 00:24:13.633 19:44:06 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:24:13.633 19:44:06 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:13.633 19:44:06 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:13.633 19:44:06 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:13.633 19:44:06 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:24:13.633 19:44:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:13.633 19:44:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:24:13.633 19:44:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:24:13.633 19:44:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:24:13.633 19:44:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:13.633 19:44:06 keyring_linux -- keyring/linux.sh@33 -- # sn=766182704 00:24:13.633 19:44:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 766182704 00:24:13.633 1 links removed 00:24:13.633 19:44:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:13.633 19:44:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:24:13.633 19:44:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:24:13.633 19:44:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:24:13.633 19:44:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:24:13.633 19:44:07 keyring_linux -- keyring/linux.sh@33 -- # sn=729034795 00:24:13.633 19:44:07 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 729034795 00:24:13.633 1 links removed 00:24:13.633 19:44:07 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85614 00:24:13.633 19:44:07 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85614 ']' 00:24:13.633 19:44:07 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85614 00:24:13.633 19:44:07 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:24:13.633 19:44:07 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.633 19:44:07 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85614 00:24:13.633 killing process with pid 85614 00:24:13.633 Received shutdown signal, test time was about 1.000000 seconds 00:24:13.633 00:24:13.633 Latency(us) 00:24:13.633 [2024-12-05T19:44:07.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.633 [2024-12-05T19:44:07.070Z] =================================================================================================================== 00:24:13.633 [2024-12-05T19:44:07.070Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:13.633 19:44:07 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:13.633 19:44:07 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:13.633 19:44:07 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85614' 00:24:13.633 19:44:07 keyring_linux -- common/autotest_common.sh@973 -- # kill 85614 00:24:13.633 19:44:07 keyring_linux -- common/autotest_common.sh@978 -- # wait 85614 00:24:13.893 19:44:07 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85598 00:24:13.893 19:44:07 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85598 ']' 00:24:13.893 19:44:07 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85598 00:24:13.893 19:44:07 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:24:13.893 19:44:07 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.893 19:44:07 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85598 00:24:13.893 killing process with pid 85598 00:24:13.893 19:44:07 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:13.893 19:44:07 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:13.893 19:44:07 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85598' 00:24:13.893 19:44:07 keyring_linux -- common/autotest_common.sh@973 -- # kill 85598 00:24:13.893 19:44:07 keyring_linux -- common/autotest_common.sh@978 -- # wait 85598 00:24:14.460 00:24:14.460 real 0m5.449s 00:24:14.460 user 0m10.713s 00:24:14.460 sys 0m1.557s 00:24:14.460 19:44:07 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:14.460 19:44:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:14.460 ************************************ 00:24:14.460 END TEST keyring_linux 00:24:14.460 ************************************ 00:24:14.460 19:44:07 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:14.460 19:44:07 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:14.460 19:44:07 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:14.460 19:44:07 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:24:14.460 19:44:07 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:14.460 19:44:07 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:14.460 19:44:07 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:24:14.460 19:44:07 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:14.460 19:44:07 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:14.460 19:44:07 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:24:14.460 19:44:07 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:24:14.460 19:44:07 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:24:14.460 19:44:07 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:24:14.460 19:44:07 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:24:14.460 19:44:07 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:24:14.460 19:44:07 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:24:14.460 19:44:07 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:24:14.460 19:44:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.460 19:44:07 -- common/autotest_common.sh@10 -- # set +x 00:24:14.460 19:44:07 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:24:14.460 19:44:07 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:24:14.460 19:44:07 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:24:14.460 19:44:07 -- common/autotest_common.sh@10 -- # set +x 00:24:16.363 INFO: APP EXITING 00:24:16.363 INFO: killing all VMs 
00:24:16.363 INFO: killing vhost app
00:24:16.363 INFO: EXIT DONE
00:24:16.932 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:24:16.932 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:24:16.932 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:24:17.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:24:17.867 Cleaning
00:24:17.867 Removing: /var/run/dpdk/spdk0/config
00:24:17.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:24:17.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:24:17.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:24:17.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:24:17.867 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:24:17.867 Removing: /var/run/dpdk/spdk0/hugepage_info
00:24:17.867 Removing: /var/run/dpdk/spdk1/config
00:24:17.867 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:24:17.867 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:24:17.867 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:24:17.867 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:24:17.867 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:24:17.867 Removing: /var/run/dpdk/spdk1/hugepage_info
00:24:17.867 Removing: /var/run/dpdk/spdk2/config
00:24:17.867 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:24:17.867 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:24:17.867 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:24:17.867 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:24:17.867 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:24:17.867 Removing: /var/run/dpdk/spdk2/hugepage_info
00:24:17.867 Removing: /var/run/dpdk/spdk3/config
00:24:17.867 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:24:17.867 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:24:17.867 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:24:17.867 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:24:17.867 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:24:17.867 Removing: /var/run/dpdk/spdk3/hugepage_info
00:24:17.867 Removing: /var/run/dpdk/spdk4/config
00:24:17.867 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:24:17.867 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:24:17.867 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:24:17.867 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:24:17.868 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:24:17.868 Removing: /var/run/dpdk/spdk4/hugepage_info
00:24:17.868 Removing: /dev/shm/nvmf_trace.0
00:24:17.868 Removing: /dev/shm/spdk_tgt_trace.pid56712
00:24:17.868 Removing: /var/run/dpdk/spdk0
00:24:17.868 Removing: /var/run/dpdk/spdk1
00:24:17.868 Removing: /var/run/dpdk/spdk2
00:24:17.868 Removing: /var/run/dpdk/spdk3
00:24:17.868 Removing: /var/run/dpdk/spdk4
00:24:17.868 Removing: /var/run/dpdk/spdk_pid56554
00:24:17.868 Removing: /var/run/dpdk/spdk_pid56712
00:24:17.868 Removing: /var/run/dpdk/spdk_pid56910
00:24:17.868 Removing: /var/run/dpdk/spdk_pid56997
00:24:17.868 Removing: /var/run/dpdk/spdk_pid57017
00:24:17.868 Removing: /var/run/dpdk/spdk_pid57126
00:24:17.868 Removing: /var/run/dpdk/spdk_pid57137
00:24:17.868 Removing: /var/run/dpdk/spdk_pid57276
00:24:17.868 Removing: /var/run/dpdk/spdk_pid57472
00:24:17.868 Removing: /var/run/dpdk/spdk_pid57626
00:24:17.868 Removing: /var/run/dpdk/spdk_pid57702
00:24:17.868 Removing: /var/run/dpdk/spdk_pid57782
00:24:17.868 Removing: /var/run/dpdk/spdk_pid57886
00:24:17.868 Removing: /var/run/dpdk/spdk_pid57959
00:24:17.868 Removing: /var/run/dpdk/spdk_pid57992
00:24:17.868 Removing: /var/run/dpdk/spdk_pid58028
00:24:17.868 Removing: /var/run/dpdk/spdk_pid58097
00:24:17.868 Removing: /var/run/dpdk/spdk_pid58202
00:24:17.868 Removing: /var/run/dpdk/spdk_pid58642
00:24:17.868 Removing: /var/run/dpdk/spdk_pid58694
00:24:17.868 Removing: /var/run/dpdk/spdk_pid58737
00:24:17.868 Removing: /var/run/dpdk/spdk_pid58746
00:24:17.868 Removing: /var/run/dpdk/spdk_pid58813
00:24:17.868 Removing: /var/run/dpdk/spdk_pid58821
00:24:17.868 Removing: /var/run/dpdk/spdk_pid58888
00:24:17.868 Removing: /var/run/dpdk/spdk_pid58897
00:24:17.868 Removing: /var/run/dpdk/spdk_pid58942
00:24:17.868 Removing: /var/run/dpdk/spdk_pid58960
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59006
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59024
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59154
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59190
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59271
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59599
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59615
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59653
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59661
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59682
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59701
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59720
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59730
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59749
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59768
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59789
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59808
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59816
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59837
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59856
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59875
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59885
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59904
00:24:17.868 Removing: /var/run/dpdk/spdk_pid59923
00:24:18.127 Removing: /var/run/dpdk/spdk_pid59943
00:24:18.127 Removing: /var/run/dpdk/spdk_pid59969
00:24:18.127 Removing: /var/run/dpdk/spdk_pid59988
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60014
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60086
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60120
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60124
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60158
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60166
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60176
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60219
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60232
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60266
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60270
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60285
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60289
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60304
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60308
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60325
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60329
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60363
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60390
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60401
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60430
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60439
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60452
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60493
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60504
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60531
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60538
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60546
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60559
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60566
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60574
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60581
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60589
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60671
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60718
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60831
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60870
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60915
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60928
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60946
00:24:18.127 Removing: /var/run/dpdk/spdk_pid60966
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61003
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61013
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61091
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61118
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61162
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61231
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61293
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61318
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61416
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61464
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61497
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61729
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61826
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61855
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61879
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61918
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61951
00:24:18.127 Removing: /var/run/dpdk/spdk_pid61985
00:24:18.127 Removing: /var/run/dpdk/spdk_pid62022
00:24:18.127 Removing: /var/run/dpdk/spdk_pid62402
00:24:18.127 Removing: /var/run/dpdk/spdk_pid62444
00:24:18.127 Removing: /var/run/dpdk/spdk_pid62792
00:24:18.127 Removing: /var/run/dpdk/spdk_pid63252
00:24:18.127 Removing: /var/run/dpdk/spdk_pid63526
00:24:18.127 Removing: /var/run/dpdk/spdk_pid64407
00:24:18.127 Removing: /var/run/dpdk/spdk_pid65323
00:24:18.127 Removing: /var/run/dpdk/spdk_pid65445
00:24:18.127 Removing: /var/run/dpdk/spdk_pid65514
00:24:18.127 Removing: /var/run/dpdk/spdk_pid66935
00:24:18.127 Removing: /var/run/dpdk/spdk_pid67238
00:24:18.127 Removing: /var/run/dpdk/spdk_pid70933
00:24:18.127 Removing: /var/run/dpdk/spdk_pid71296
00:24:18.127 Removing: /var/run/dpdk/spdk_pid71405
00:24:18.127 Removing: /var/run/dpdk/spdk_pid71540
00:24:18.127 Removing: /var/run/dpdk/spdk_pid71574
00:24:18.127 Removing: /var/run/dpdk/spdk_pid71611
00:24:18.127 Removing: /var/run/dpdk/spdk_pid71641
00:24:18.127 Removing: /var/run/dpdk/spdk_pid71733
00:24:18.127 Removing: /var/run/dpdk/spdk_pid71861
00:24:18.127 Removing: /var/run/dpdk/spdk_pid72016
00:24:18.127 Removing: /var/run/dpdk/spdk_pid72103
00:24:18.127 Removing: /var/run/dpdk/spdk_pid72297
00:24:18.127 Removing: /var/run/dpdk/spdk_pid72373
00:24:18.386 Removing: /var/run/dpdk/spdk_pid72457
00:24:18.386 Removing: /var/run/dpdk/spdk_pid72818
00:24:18.386 Removing: /var/run/dpdk/spdk_pid73224
00:24:18.386 Removing: /var/run/dpdk/spdk_pid73225
00:24:18.386 Removing: /var/run/dpdk/spdk_pid73226
00:24:18.386 Removing: /var/run/dpdk/spdk_pid73485
00:24:18.386 Removing: /var/run/dpdk/spdk_pid73745
00:24:18.386 Removing: /var/run/dpdk/spdk_pid74133
00:24:18.386 Removing: /var/run/dpdk/spdk_pid74139
00:24:18.386 Removing: /var/run/dpdk/spdk_pid74465
00:24:18.386 Removing: /var/run/dpdk/spdk_pid74480
00:24:18.386 Removing: /var/run/dpdk/spdk_pid74500
00:24:18.386 Removing: /var/run/dpdk/spdk_pid74525
00:24:18.386 Removing: /var/run/dpdk/spdk_pid74531
00:24:18.386 Removing: /var/run/dpdk/spdk_pid74890
00:24:18.386 Removing: /var/run/dpdk/spdk_pid74939
00:24:18.386 Removing: /var/run/dpdk/spdk_pid75266
00:24:18.386 Removing: /var/run/dpdk/spdk_pid75463
00:24:18.386 Removing: /var/run/dpdk/spdk_pid75889
00:24:18.386 Removing: /var/run/dpdk/spdk_pid76431
00:24:18.386 Removing: /var/run/dpdk/spdk_pid77313
00:24:18.386 Removing: /var/run/dpdk/spdk_pid77939
00:24:18.386 Removing: /var/run/dpdk/spdk_pid77947
00:24:18.386 Removing: /var/run/dpdk/spdk_pid79970
00:24:18.386 Removing: /var/run/dpdk/spdk_pid80023
00:24:18.386 Removing: /var/run/dpdk/spdk_pid80081
00:24:18.386 Removing: /var/run/dpdk/spdk_pid80129
00:24:18.386 Removing: /var/run/dpdk/spdk_pid80230
00:24:18.386 Removing: /var/run/dpdk/spdk_pid80290
00:24:18.386 Removing: /var/run/dpdk/spdk_pid80352
00:24:18.386 Removing: /var/run/dpdk/spdk_pid80407
00:24:18.386 Removing: /var/run/dpdk/spdk_pid80774
00:24:18.386 Removing: /var/run/dpdk/spdk_pid81987
00:24:18.386 Removing: /var/run/dpdk/spdk_pid82133
00:24:18.386 Removing: /var/run/dpdk/spdk_pid82368
00:24:18.386 Removing: /var/run/dpdk/spdk_pid82957
00:24:18.386 Removing: /var/run/dpdk/spdk_pid83117
00:24:18.386 Removing: /var/run/dpdk/spdk_pid83274
00:24:18.386 Removing: /var/run/dpdk/spdk_pid83372
00:24:18.386 Removing: /var/run/dpdk/spdk_pid83533
00:24:18.386 Removing: /var/run/dpdk/spdk_pid83642
00:24:18.386 Removing: /var/run/dpdk/spdk_pid84353
00:24:18.386 Removing: /var/run/dpdk/spdk_pid84384
00:24:18.386 Removing: /var/run/dpdk/spdk_pid84419
00:24:18.386 Removing: /var/run/dpdk/spdk_pid84679
00:24:18.386 Removing: /var/run/dpdk/spdk_pid84715
00:24:18.386 Removing: /var/run/dpdk/spdk_pid84746
00:24:18.386 Removing: /var/run/dpdk/spdk_pid85218
00:24:18.386 Removing: /var/run/dpdk/spdk_pid85228
00:24:18.386 Removing: /var/run/dpdk/spdk_pid85476
00:24:18.386 Removing: /var/run/dpdk/spdk_pid85598
00:24:18.386 Removing: /var/run/dpdk/spdk_pid85614
00:24:18.386 Clean
00:24:18.386 19:44:11 -- common/autotest_common.sh@1453 -- # return 0
00:24:18.386 19:44:11 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:24:18.386 19:44:11 -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:18.386 19:44:11 -- common/autotest_common.sh@10 -- # set +x
00:24:18.386 19:44:11 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:24:18.386 19:44:11 -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:18.386 19:44:11 -- common/autotest_common.sh@10 -- # set +x
00:24:18.646 19:44:11 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:24:18.646 19:44:11 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:24:18.646 19:44:11 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:24:18.646 19:44:11 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:24:18.646 19:44:11 -- spdk/autotest.sh@398 -- # hostname
00:24:18.646 19:44:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:24:18.906 geninfo: WARNING: invalid characters removed from testname!
00:24:45.624 19:44:36 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:24:46.192 19:44:39 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:24:48.729 19:44:42 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:24:52.021 19:44:44 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:24:54.554 19:44:47 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:24:57.086 19:44:50 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:24:59.683 19:44:52 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:24:59.683 19:44:52 -- spdk/autorun.sh@1 -- $ timing_finish
00:24:59.683 19:44:52 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:24:59.683 19:44:52 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:24:59.683 19:44:52 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:24:59.683 19:44:52 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
+ [[ -n 5195 ]]
+ sudo kill 5195
00:24:59.705 [Pipeline] }
00:24:59.718 [Pipeline] // timeout
00:24:59.722 [Pipeline] }
00:24:59.736 [Pipeline] // stage
00:24:59.741 [Pipeline] }
00:24:59.755 [Pipeline] // catchError
00:24:59.764 [Pipeline] stage
00:24:59.766 [Pipeline] { (Stop VM)
00:24:59.778 [Pipeline] sh
00:25:00.059 + vagrant halt
00:25:03.349 ==> default: Halting domain...
00:25:08.634 [Pipeline] sh
00:25:08.914 + vagrant destroy -f
00:25:12.203 ==> default: Removing domain...
00:25:12.213 [Pipeline] sh
00:25:12.493 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output
00:25:12.501 [Pipeline] }
00:25:12.518 [Pipeline] // stage
00:25:12.524 [Pipeline] }
00:25:12.541 [Pipeline] // dir
00:25:12.545 [Pipeline] }
00:25:12.558 [Pipeline] // wrap
00:25:12.564 [Pipeline] }
00:25:12.577 [Pipeline] // catchError
00:25:12.587 [Pipeline] stage
00:25:12.589 [Pipeline] { (Epilogue)
00:25:12.602 [Pipeline] sh
00:25:12.883 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:25:19.489 [Pipeline] catchError
00:25:19.491 [Pipeline] {
00:25:19.502 [Pipeline] sh
00:25:19.781 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:25:19.782 Artifacts sizes are good
00:25:19.796 [Pipeline] }
00:25:19.812 [Pipeline] // catchError
00:25:19.823 [Pipeline] archiveArtifacts
00:25:19.830 Archiving artifacts
00:25:19.952 [Pipeline] cleanWs
00:25:19.964 [WS-CLEANUP] Deleting project workspace...
00:25:19.964 [WS-CLEANUP] Deferred wipeout is used...
00:25:19.970 [WS-CLEANUP] done
00:25:19.973 [Pipeline] }
00:25:19.988 [Pipeline] // stage
00:25:19.993 [Pipeline] }
00:25:20.008 [Pipeline] // node
00:25:20.014 [Pipeline] End of Pipeline
00:25:20.051 Finished: SUCCESS